Older blog entries for mentifex (starting at number 95)

German Artificial Intelligence Programming Journal

Thurs.14.MAR.2013 -- Seeking Confirmation of Inference

In the German Wotan artificial intelligence with machine reasoning by inference, the AskUser module converts an otherwise silent inference into a yes-or-no question, seeking confirmation of the inference with a yes-answer or refutation with a no-answer. Prior to confirmation or refutation, the conceptual engrams of the question are a mere proposition for consideration by the human user. When the user enters the answer, the KbRetro module must either establish associative tags from subject to verb to direct object in the case of a yes-answer, or disrupt those same tags in the case of a no-answer by inserting the negational concept "NICHT", the German for the English "NOT".
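
To make the mechanism concrete, here is a minimal JavaScript sketch of a KbRetro-style adjustment; the kbRetro function, the tag layout and all the concept numbers are illustrative assumptions, not the actual Wotan code.

    // Hypothetical sketch of retroactive knowledge-base adjustment.
    // An inferred idea is held as subject -> verb -> object nodes.
    const NICHT = 250; // assumed concept number for the German "NICHT"

    function kbRetro(idea, answer) {
      if (answer === "ja") {
        // Yes-answer: confirm the proposition by sealing the tags.
        idea.subject.seq = idea.verb.id;  // subject points on to the verb
        idea.verb.pre = idea.subject.id;  // verb points back to the subject
        idea.verb.seq = idea.object.id;   // verb points on to the object
      } else if (answer === "nein") {
        // No-answer: keep the idea but negate its verb with NICHT.
        idea.verb.negated = true;
        idea.verb.adverb = NICHT;
      }
      return idea;
    }

    // Refuting the inference "EVA HAT KIND" with a no-answer:
    const idea = {
      subject: { id: 501, word: "EVA" },
      verb:    { id: 810, word: "HAT" },
      object:  { id: 515, word: "KIND" }
    };
    console.log(kbRetro(idea, "nein").verb); // HAT, negated, adverb NICHT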

Fri.15.MAR.2013 -- Setting Parameters Properly

Although the AskUser module is asking the proper question, "HAT EVA EIN KIND" in German for "Does Eva have a child?", the concepts of the question are not being stored properly in the Psi conceptual array.

Sat.16.MAR.2013 -- Machine Learning by Inference

Now we have coordinated the operation of InFerence, AskUser and KbRetro. When we input, "eva ist eine frau" for "Eva is a woman," the German AI makes a silent inference that Eva may perhaps have a child. AskUser outputs the question, "HAT EVA EIN KIND" for "Does Eva have a child?" When we answer "nein" in German for English "no", the KbRetro module adjusts the knowledge base (KB) retroactively by negating the verb "HAT" and the German AI says, "EVA HAT NICHT EIN KIND", or "Eva does not have a child" in English.

German Artificial Intelligence Programming Journal

Sat.9.MAR.2013 -- Making Inferences in German

When the German Wotan AI uses the InFerence module to think rationally, the AI Mind creates a silent, conceptual inference and then calls the AskUser module to seek confirmation or refutation of the inference. While generating its output, the AskUser module calls the DeArticle module to insert a definite or indefinite article into the question being asked. The AI has been using the wrong article with "HAT EVA DAS KIND?" when it should be asking, "HAT EVA EIN KIND?" When we tweak the software to switch from the definite article to the indefinite article, the AI gets the gender wrong with "HAT EVA EINE KIND?"

Tues.12.MAR.2013 -- A Radical Departure

In the AskUser module, to put a German article before the direct object of the query, we may have to move the DeArticle call into the backwards search for the query-object (quobj), so that the gender of the query-object can be found and sent as a parameter into the DeArticle module.

It may seem like a radical departure to call DeArticle from inside the search-loop for a noun, but only one engram of the German noun will be retrieved, and so there should be no problem with inserting a German article at the same time. The necessary parameters are right there at the time-point from which the noun is being retrieved.
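
As a rough illustration, the following JavaScript sketch retrieves the query-object in a backwards search and chooses the article at the moment of retrieval; the array layout, the gender coding and the deArticle helper are invented for the example, and the article table is simplified to the indefinite accusative forms.

    // Sketch: article chosen inside the noun-retrieval loop, where
    // the gender parameter of the retrieved engram is at hand.
    const EIN = { m: "EINEN", f: "EINE", n: "EIN" }; // accusative forms

    function deArticle(gender) {
      return EIN[gender] || "EIN";
    }

    function findQuobjWithArticle(engrams, quobj) {
      for (let t = engrams.length - 1; t >= 0; t--) { // backwards search
        const e = engrams[t];
        if (e.concept === quobj) {
          // Only one engram of the noun is retrieved, so the article
          // can be inserted right here with the correct gender.
          return deArticle(e.gender) + " " + e.word;
        }
      }
      return null;
    }

    const engrams = [{ concept: 515, word: "KIND", gender: "n" }];
    console.log(findQuobjWithArticle(engrams, 515)); // "EIN KIND"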

Wed.13.MAR.2013 -- Preventing False Parameters

When the OldConcept module recognizes a known German noun, normally the "mfn" gender of that noun is detected and stored once again as a fresh conceptual engram for that noun. However, today we have learned that in OldConcept we must store a zero value for the recognition of forms of "EIN" as the German indefinite article, because the word "EIN" has no intrinsic gender and only acquires the gender of its associated noun. When we insert the corrective code into the OldConcept module, we finally witness the German Wotan AI engaging in rational thought by means of inference when we input "eva ist eine frau", or "Eva is a woman." The German AI makes a silent inference about Eva and calls the AskUser module to ask the user, "HAT EVA EIN KIND", which means in English, "Does Eva have a child?" Next we must work on KbRetro to positively confirm or negatively adjust the knowledge base in accordance with the answer to the question.
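
A tiny sketch of the corrective rule, with an assumed "mfn" coding of 1 = masculine, 2 = feminine, 3 = neuter and invented function names:

    // Forms of "EIN" get gender zero: the article has no intrinsic
    // gender and only borrows the gender of its associated noun.
    const EIN_FORMS = new Set(["EIN", "EINE", "EINEM", "EINEN", "EINER", "EINES"]);

    function genderToStore(word, detectedMfn) {
      if (EIN_FORMS.has(word)) return 0; // store zero, not a false gender
      return detectedMfn;
    }

    console.log(genderToStore("EINE", 2)); // 0 -- "EINE" must not assert feminine
    console.log(genderToStore("FRAU", 2)); // 2 -- the noun keeps its own gender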

German Artificial Intelligence Programming Journal

Wed.6.MAR.2013 -- Problems with the WhatBe Module

As we implement InFerence in the Wotan German Supercomputer AI, the program tends to call the WhatBe module to ask a question about a previously unknown word. When we input to the AI, "eva ist eine frau", first Wotan makes an inference about Eva and asks if Eva has a child. Then the AI mistakenly says, "WAS IRRTUM EVA" when the correct output should be "WAS IST EVA". This problem affords us an opportunity to improve the German performance of the WhatBe module, which came into the German AI from the English MindForth AI.

First we need to determine which location in the AI source code is calling the WhatBe mind-module, and so we insert some diagnostics. Knowing where the call comes from lets us work on the proper preparation of parameters from outside WhatBe to be used inside WhatBe.

Thurs.7.MAR.2013 -- Dealing with Number in German

We are learning that we must handle grammatical number quite differently in the German AI than in the English AI. English generally uses the ending "-s" to indicate plural number, but German has no single such simple clue. In German we have a plethora of clues about number, and we can use the OutBuffer to work with some of them, such as "-heit" indicating singular and "-heiten" indicating plural. In German we can also establish priority among rules, such as letting an "-e" ending in the OutBuffer suggest a plural noun, while letting the discovery of a singular verb overrule the suggestion that a noun is in the plural. The main point here is that in German we must get away from the simplistic English rules about number.
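
The prioritized rules might look like the following JavaScript sketch; the rules and names are illustrative only, since real German number-detection needs a far larger rule set.

    // Sketch of prioritized number-detection for German nouns.
    function guessNumber(noun, verbIsSingular) {
      if (noun.endsWith("HEITEN")) return "plural";   // strong clue
      if (noun.endsWith("HEIT")) return "singular";   // strong clue
      if (noun.endsWith("E")) {
        // Weak plural hint; a singular verb overrules it.
        return verbIsSingular ? "singular" : "plural";
      }
      return "unknown";
    }

    console.log(guessNumber("FREIHEITEN", false)); // plural
    console.log(guessNumber("KATZE", true));       // singular: the verb overrules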

Fri.8.MAR.2013 -- Removing Obsolete Influences

In NewConcept let us try changing the default expectation of number for a new noun from plural to singular. At first we notice no problem with a default singular. Then we notice that the InFerence module is using a default plural ("2") for the subject-noun of the silent inference. We tentatively change the default to singular ("1") until we can devise a more robust determinant of number in InFerence.

We are having a problem with the "ocn" variable for "old concept number". Just as with the obsolete "recnum", there is no longer any reason to use the "ocn" variable, so we comment out some code.

German Artificial Intelligence Programming Journal

The DeKi Programming Journal (DKPJ) is both a tool in coding the German Wotan open-source artificial intelligence (AI) and an archival record of how the German Supercomputer AI has evolved over time.

Sun.3.MAR.2013 -- Problems with AskUser

In our efforts to implement InFerence in the Wotan German AI, we have gotten the AI to stop asking "HABEN EVA KIND?" but now AskUser is outputting "HAT EVA DIE KIND" as if the German noun "Kind" for "child" were feminine instead of neuter. We should investigate to see if the DeArticle module has a problem.

Mon.4.MAR.2013 -- Problems with DeArticle

By the use of a diagnostic message, we have learned that the DeArticle module is finding the accusative plural "DIE" form without regard to what case is required. Now we need to coordinate DeArticle more closely with the AskUser module, so that when AskUser is seeking a direct object, DeArticle seeks one as well. There has long been a "dirobj" flag, but it is perhaps time to use something more sophisticated, such as "dobcon" or even "acccon" for an accusative "statuscon". After a German preposition like "mit" or "bei" that requires the dative case, we may want to use a flag like "datcon" for a dative "statuscon". So perhaps now we should use "acccon" in preparation for using also "gencon" and "datcon" or maybe even "nomcon" for the nominative.

Tues.5.MAR.2013 -- Coordinating AskUser and DeArticle

A better "statuscon" for coordinating between AskUser and DeArticle is "dbacon", because it can be used for all four declensional cases in German. When we use "dbacon" and when we make the "LEAVE" statement come immediately after the first instance of selecting an article with the correct "dbacon", we obtain "HAT EVA DAS KIND" as the question from AskUser after the input of "eva ist eine frau". We still need to take gender into account, so we may declare an "mfncon" variable to coordinate searches for words having the correct gender.
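
A JavaScript sketch of the coordination, using the journal's apparent coding of "dba" 4 for the accusative and an assumed "mfn" coding of 1/2/3 for masculine/feminine/neuter; the table entries are invented.

    // One dbacon flag serves all four cases; the loop breaks at the
    // first article matching both case and gender, the counterpart
    // of the Forth LEAVE statement.
    const ARTICLES = [
      { word: "DER", dba: 1, mfn: 1 }, // nominative masculine
      { word: "DIE", dba: 1, mfn: 2 }, // nominative feminine
      { word: "DIE", dba: 4, mfn: 2 }, // accusative feminine
      { word: "DAS", dba: 4, mfn: 3 }  // accusative neuter
    ];

    function chooseArticle(dbacon, mfncon) {
      for (const a of ARTICLES) {
        if (a.dba === dbacon && a.mfn === mfncon) return a.word; // "LEAVE"
      }
      return "";
    }

    console.log(chooseArticle(4, 3)); // "DAS", as in "HAT EVA DAS KIND"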

German Artificial Intelligence Programming Journal

Thurs.31.JAN.2013 -- Troubleshooting the InFerence Module

Yesterday in the Wotan German AI we implemented the InFerence module from the English MindForth AI, but we need to continue troubleshooting the German AI functionality because the AI was creating silent inferences with only a subject and a verb but not yet the direct object of the verb.

Fri.1.FEB.2013 -- Asking Users to Confirm an Inference

The Wotan German AI seems to be inserting the wrong "tqv" retroactively across the boundary between sentences when we type in "eva ist eine frau" in order to trigger an inference. A contributing factor is the code at the start of InStantiate which converts any zero "seqneed" to a 5=noun seqneed by default. It may be time to comment out that code. When we comment out the line setting the seqneed to 5=noun by default, suddenly the AI makes the correct silent inference, but we do not know whether anything else has gone wrong that was depending on the line of code that has been commented out.

Then we discover that AskUser is not posing a question based on the silent inference because there is a left-over requirement for a plural noun-phrase number ("nphrnum"). When we comment out that requirement, as we did earlier in the English MindForth AI, we get not the ideal question of "HAT EVA EIN KIND?" but rather the faulty output of "HABEN EVA IRRTUM". This AskUser output is nevertheless gratifying and encouraging, because it reveals that a silent inference has been made, and that the German Wotan AI is trying to ask a yes-or-no question so that a human user will either confirm or refute the inference.


Memes of Russian Artificial Intelligence

Sun.14.OCT.2012 -- Restoring the Expression of Direct Objects

The Russian artificial intelligence (RuAi) is failing to say a direct object after some verbs. After manifold troubleshooting to determine why VerbPhrase was not calling NounPhrase for the direct object, we finally discovered that the word "YA" for "I" was being stored with a spurious "dba" of "4" as if it were an accusative direct object. At the start of InStantiate, we had to stop testing merely for a zero "seqneed" and to test also for a "dirobj" flag set to one ("1") as a precondition for setting "seqneed" to five ("5") for a noun or a pronoun -- still assuming that "seqneed" deals with either verbs or direct objects.
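
A small JavaScript sketch of the tightened precondition (the state object and names are assumptions):

    // seqneed rises to 5 (noun expected) only when a direct object
    // is genuinely awaited, not on every zero seqneed.
    function updateSeqneed(state) {
      if (state.seqneed === 0 && state.dirobj === 1) {
        state.seqneed = 5; // 5 = expecting a noun or pronoun
      }
      return state.seqneed;
    }

    console.log(updateSeqneed({ seqneed: 0, dirobj: 1 })); // 5
    console.log(updateSeqneed({ seqneed: 0, dirobj: 0 })); // 0 -- no spurious default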

Mon.15.OCT.2012 -- Preventing Misrecognition of Verbs

The Russian AI is recording a known, second-person Russian verb as if it were a first-person form. Then errors creep in because the RuAi tries to say something in the first person but erroneously uses the second-person verb-form. As we troubleshoot, we discover that OldConcept is not recording the proper "dba" values for inflected case or grammatical person. Further troubleshooting reveals that OldConcept was searching backwards for the first instance of "oldpsi" and accepting its "dba" value, which is not trustworthy for verbs. We commented out the offending line of code, and the AI Mind in Russian stopped mixing up the grammatical persons. Onwards now to the Technological Singularity of memetic lore.


Artificial Intelligence in Russian

Sat.6.OCT.2012 -- Negation of Russian Verbs

In the free, open-source Russian artificial intelligence (RuAi), we need to work on the negation of verbs before we can implement the calling of the VisRecog module from the VerbPhrase module. When we type "Ty ne znayesh menya" for "You do not know me" into the current RuAi, it answers incorrectly "Ya znayu ne tebya" for "I do not know you," with the negational adverb "ne" for "not" in the wrong place.

After experimentation with diagnostic "alert" messages, we moved the nay-saying code into the same area of VerbPhrase that uses parameters to select a Russian verb-form. Thus we got the RuAi to put the negational adverb before the verb and not after the verb.
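
The effect of the move, sketched in JavaScript with invented structure: the negation is emitted at the same point where the verb-form parameters are applied, so it lands before the verb.

    // Nay-saying code placed with verb selection puts НЕ before the verb.
    function sayVerbPhrase(verbForm, negated) {
      const words = [];
      if (negated) words.push("НЕ");
      words.push(verbForm);
      return words.join(" ");
    }

    console.log("Я " + sayVerbPhrase("ЗНАЮ", true) + " ТЕБЯ"); // Я НЕ ЗНАЮ ТЕБЯ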

Sun.7.OCT.2012 -- Negation of Putative Be-Verbs

Our next task in creating Russian artificial intelligence is to implement the negation of unexpressed, putative be-verbs in Russian. Currently the Dushka AI tentatively assumes the occurrence of a be-verb whenever a noun or pronoun begins a statement in the nominative. We need to introduce special handling of the negative adverb "NE" so that the RuAi still waits for a putative be-verb. Although we are tempted to impose a correctable default negation on each putative be-verb, we owe it to Occam to let an actual negation determine what happens.



MindForth Programming Journal -- 2012 June 29

1 Fri.29.JUN.2012 -- IdeaPlex: Sum of all Ideas

The sum of all ideas in a mind can be thought of as the IdeaPlex. These ideas are expressed in human language and are subject to modification or revision in the course of sensory engagement with the world at large.

The knowledge base (KB) in an AiMind is a subset of the IdeaPlex. Whereas the IdeaPlex is the sum totality of all the engrams of thought stored in the AI, the knowledge base is the distilled body of knowledge which can be expanded by means of inference with machine reasoning or extracted as responses to input-queries.

The job of a human programmer working as an AI mind-tender is to maintain the logical integrity of the machine IdeaPlex and therefore of the AI knowledge base. If the AI Mind is implanted in a humanoid robot, or is merely resident on a computer, it is the work of a roboticist to maintain the pathways of sensory input/output and the mechanisms of the robot motorium. The roboticist is concerned with hardware, and the mind-tender is concerned with the software of the IdeaPlex.

Whether the mind-tender is a software engineer or a hacker hired off the streets, the tender must monitor the current chain of thought in the machine intelligence and adjust the mental parameters of the AI so that all thinking is logical and rational, with no derailments of ideation into nonsense statements or absurdities of fallacy.

Evolution occurs narrowly and controllably in one artilect installation as the mind-tenders iron out bugs in the AI software and introduce algorithmic improvements. AI evolution explodes globally and uncontrollably when survival of the fittest AI Minds leads to a Technological Singularity.


2 Fri.29.JUN.2012 -- Perfecting the IdeaPlex

We may implement our new idea of faultlessizing the IdeaPlex by working on the mechanics of responding to an input-query such as "What do bears eat?" We envision the process as follows. The AI imparts extra activation to the verb "eat" from the query, perhaps first in the InStantiate module, but more definitely in the ReActivate module, which should be calling the SpreadAct module to send activation backwards to subjects and forwards to objects. Meanwhile, if it is not doing so already, the query-input of the noun "bears" should be re-activating the concept of "bears" with only a normal activation. Ideas stored with the "triple" of "bears eat (whatever)" should then be ready for sentence-generation in response to the query. Neural inhibition should permit the generation of multiple responses, if they are available in the knowledge base.

During response-generation, we expect the subject-noun to use the verblock to lock onto its associated verb, which shall then use nounlock to lock onto the associated object. Thus the sentence is retrieved intact. (It may be necessary to create more "lock" variables for various parts of speech.)
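
A minimal JavaScript sketch of the lock-based retrieval, with an invented time-indexed structure standing in for the Psi memory:

    // The subject locks onto its verb via verblock; the verb locks
    // onto its object via nounlock, so the sentence returns intact.
    const psi = {
      100: { word: "KIDS",   verblock: 101 },
      101: { word: "MAKE",   nounlock: 102 },
      102: { word: "ROBOTS" }
    };

    function retrieveSentence(subjectTime) {
      const subj = psi[subjectTime];
      const verb = psi[subj.verblock];
      const obj  = psi[verb.nounlock];
      return [subj.word, verb.word, obj.word].join(" ");
    }

    console.log(retrieveSentence(100)); // "KIDS MAKE ROBOTS"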

We should perhaps use an input query of "What do kids make?", because MindForth already has the idea that "Kids make robots".


3 Sat.30.JUN.2012 -- Improving the SpreadAct Module

In our tentative coding, we need now to insert diagnostic messages that will announce each step being taken in the receipt and response to an input-query.

We discover some confusion taking place in the SpreadAct module, where "pre @ 0 > IF" serves as the test for performing a transfer of activation backwards to a "pre" concept. However, the "pre" item was replaced at one time with "prepsi", so apparently the backwards-activation code never runs. We may need to test for a positive "prepsi" instead of a positive "pre".

We go into the local, pre-upload version of the Google Code MindForth "var" (variable) wiki-page and we add a description for "prepsi", since we are just now conducting serious business with the variable. Then in the MindForth SpreadAct module we switch from testing in vain for a positive "pre" value to testing for a positive "prepsi". Immediately our diagnostic messages indicate that, during generation of "KIDS MAKE ROBOTS" as a response, activation is passed backwards from the verb "MAKE" to the subject-noun "KIDS". However, SpreadAct does not seem to go into operation until the response is generated. We may need to have SpreadAct operate during the input of a verb as part of a query, in a chain where ReActivate calls SpreadAct to flush out potential subject-nouns by retro-activating them.
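
The repaired test, reduced to a JavaScript sketch; the node layout is invented for illustration, while the conservative one-unit spikelet matches the behavior the diagnostics report.

    // Backwards activation flows only on a positive prepsi tag;
    // the old test on the obsolete "pre" field never fired.
    function spreadActBackwards(nodes, verb, spikelet = 1) {
      if (verb.prepsi > 0) {
        const subject = nodes.find(n => n.psi === verb.prepsi);
        if (subject) subject.act += spikelet; // conservative single unit
      }
    }

    const nodes = [{ psi: 72, word: "KIDS", act: 35 }];
    spreadActBackwards(nodes, { word: "MAKE", prepsi: 72 });
    console.log(nodes[0].act); // 36 -- one unit passed back to the subject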


4 Sat.30.JUN.2012 -- Approaching the "seqneed" Problem

As we search back through versions of MindForth AI, we see that the 13 October 2010 MFPJ document describes our decision to stop having ReActivate call SpreadAct. Now we want to reinstate the calls, because we want to send activation backwards from heavily activated verbs to their subjects. Apparently the .psi position of the "seqpsi" has changed from position six to position seven, so we must change the ReActivate code accordingly. We make the change, and we observe that the input of "What do kids make?" causes the .psi line at time-point number 449 to show an increase in activation from 35 to 36 on the #72 KIDS concept. There is such a small increase from SpreadAct because SpreadAct conservatively imparts only one unit of activation backwards to the "prepsi" concept. If we have trouble making the correct subjects be chosen in response to queries, we could increase the backwards SpreadAct spikelet from one to a higher value.

Next we have a very tricky situation. When we ask, "What do kids make?", at first we get the correct answer of "Kids make robots." When we ask the same question again, we erroneously get, "Kids make kids." It used to be that such a problem was due to incorrect activation-levels, with the word "KIDS" being so highly activated that it was chosen erroneously for both subject and direct object. Nowadays we are starting with a subject-node and using "verblock" and "nounlock" to go unerringly from a node to its "seq" concept. However, in this current case we notice that the original input query of "What do kids make?" is being stored in the Psi array with an unwarranted seq-value of "72" for "KIDS" after the #73 "MAKE" verb. Such an erroneous setting seems to be causing the erroneous secondary output of "Kids make kids." It could be that the "moot" system is not working properly. The "moot" flag was supposed to prevent tags from being set during input queries.

In the InStantiate module, the "seqneed" code for verbs is causing the "MAKE" verb to receive an erroneous "seq" of #72 "KIDS". We may be able to modify the "seqneed" system to not install a "seq" at the end of an input.

When we increased the number of time-points for the "seqneed" system to look backwards from two to eight, the system stopped assigning the spurious "seq" to the #73 verb "MAKE" at t=496 and instead assigned it to the #59 verb "DO" at t=486.


5 Sun.1.JUL.2012 -- Solving the "seqneed" Problem

After our coding session yesterday, we realized that the solution to the "seqneed" problem may lie in constraining the time period during which InStantiate searches backwards for a verb needing a "seq" noun. When we set up the "seqneed" mechanism, we rather naively ordained that the search should try to go all the way back to the "vault" value, relying on a "LEAVE" statement to abandon the loop after finding one verb that could take a "seq".

Now we have used a time-of-seqneed "tsn" variable to limit the backwards searches in the "seqneed" mechanism of the InStantiate module, and the MindForth AI seems to be functioning better than ever. Therefore we shall try to clean up our code by removing diagnostics and upload the latest MindForth AI to the Web.
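
A JavaScript sketch of the constrained backwards search, with an invented array structure; the key point is that "tsn" rather than the vault is the floor of the loop.

    // Search backwards from the present moment for a verb still
    // needing a "seq" noun, but stop at the time-of-seqneed "tsn".
    function findVerbNeedingSeq(psi, now, tsn) {
      for (let t = now; t > tsn; t--) {
        const node = psi[t];
        if (node && node.pos === "verb" && node.seq === 0) {
          return t; // first eligible verb; the caller leaves the loop here
        }
      }
      return -1; // nothing recent enough: no spurious seq is assigned
    }

    const psi = [];
    psi[486] = { pos: "verb", word: "DO",   seq: 0 };
    psi[496] = { pos: "verb", word: "MAKE", seq: 0 };
    console.log(findVerbNeedingSeq(psi, 500, 490)); // 496; t=486 is out of reach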


Artificial Intelligence in Russian

1. Thurs.9.FEB.2012 -- Unspoken Be-Verbs as a Default

The Russian-speaking artificial intelligence Dushka needs a default BeVerb module that will silently assert itself as the automatic carrier of thought until a non-be-verb takes over from the provisional default. In our coding of a Russian mind, we will assume that any noun or pronoun, beginning a thought in the nominative case, is automatically the subject of a putative BeVerb until proven otherwise. In this way, our cognitive software will prepare for a BeVerb and switch automatically when a non-be-verb occurs.

We should work first on the comprehension of putative be-verbs and second on their generation, so that what we learn in comprehending be-verbs may be used in generating thoughts involving a BeVerb. So we type into the AI a Russian sentence to see if the software can understand it.

Human: душка робот

Robot: ДУШКА ЧТО ДУШКА ТАКОЕ

We said "Dushka is a robot" but the AI responded only, "Dushka -- what is Dushka?" We need to implement a default BeVerb in the comprehension of a sentence that lacks a visible BeVerb.

In the InStantiate module, we can trap for the input of a "c==32" space-bar when the "seqneed" is set to "8" for want of an incoming verb. We may then do something outrageous, but normal for Russian. From InStantiate we may provisionally send into AudMem a space-bar character with an "audpsi" of "800" for the verb БЫТЬ ("to be"), so that the AI is ready to record any noun coming in as a predicate nominative in conjunction with the be-verb. Now, if we implement such an outrageous step, it is possible that our AI memory-banks will become replete with quasi-spurious engrams of infinitive be-verbs that typically do not materialize. It could be that the presence of a spurious be-verb engram will not matter, if the cancellation of the default occurs as soon as some actual verb comes in. Then cancelling the spurious default will involve removing or nullifying any associative tags laid down momentarily during the enactment of the default.
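
A rough JavaScript sketch of the default and its cancellation; the state layout and function names are invented, while the "audpsi" value of 800 for БЫТЬ comes from the text above.

    // On a space-bar (32) while a verb is still awaited (seqneed 8),
    // deposit a silent placeholder engram of the be-verb БЫТЬ.
    function onInputChar(c, state, audMem) {
      if (c === 32 && state.seqneed === 8 && !state.beVerbPlanted) {
        audMem.push({ audpsi: 800, word: " " }); // imaginary БЫТЬ
        state.beVerbPlanted = true;
      }
    }

    // A real incoming verb cancels the default: the placeholder and
    // any tags laid down for it must be removed or nullified.
    function onRealVerb(state, audMem) {
      if (state.beVerbPlanted) {
        audMem.pop(); // assumes the placeholder was the latest deposit
        state.beVerbPlanted = false;
      }
    }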

2. Fri.10.FEB.2012 -- Instantiating Imaginary Be-Verbs

In the InStantiate module we will now experiment with code to create in auditory memory a pseudo-engram of a non-existent be-verb after the perception of a nominative noun or pronoun. Since the Russian-speaking mind waits for a predicate nominative, it needs at least an imaginary be-verb as the holder of associative links between subject and predicate nominative.

Now inside InStantiate we have assembled the code that creates a be-verb pseudo-engram in the three memory arrays for "Psi" concepts, Russian words and auditory engrams. The Psi node is automatically creating a "pre" tag that links the pseudo-verb back to its subject. We need to implement code that will finish the intermediation of the unspoken Russian BeVerb between its subject and the predicate nominative. The code must also cancel or uninstall the imaginary BeVerb if a real verb occurs instead of the provisionally expected BeVerb.

3. Sat.11.FEB.2012 -- Integration of Default Be-Verbs

We have the AI pretending that a BeVerb comes in after a nominative subject, and now we need to create the "seq" tag from the subject to the default BeVerb. First in the InStantiate module we insert a line of code declaring that the pseudo-be-verb is indeed a verb with respect to its part of speech, so that the following code will try to reach backwards to the subject engram and install a "seq" tag referring to the now not-so-imaginary BeVerb. We run the Dushka AI and we type in, ты робот -- which is Russian for "You are a robot", but without the be-verb. We are puzzled when Dushka answers, Я ЧТО Я ТАКОЕ ("I -- WHAT AM I?") and that's all she wrote. It may indicate that her concept of self has been activated by the input referring to "you", but she does not seem to have understood the input. We check the diagnostic display, and we see that her concept of self now has a "seq" tag referring right back to herself instead of to the default Russian BeVerb. What went wrong? We look at the JavaScript source code again, and we see that it was not enough to set the part-of-speech as a verb. We go ahead and we set the Psi concept-number to be that of the Russian be-verb. Then we run the Russian AI again with the same input and we sit there in shock when the AI announces to us: Я РОБОТ. Dushka has just said to us, "I AM A ROBOT" in Russian. From the diagnostic display we discover that the same changes that made Dushka able to understand the idea, made her able to think the idea.


Artificial Intelligence in Russian

Fri.3.FEB.2012 -- Recognizing Inflections

For the Russian-thinking Dushka AI Mind, we have perhaps stumbled upon a way to avoid the hard-coding of noun paradigms and instead to let the Russian AI learn the inflected endings of Russian nouns from its own experience. For example, right now the Russian artificial intelligence (RuAi) fails to recognize the Psi concept #501 БОГ in the following exchange.

Human: я уважаю бога ("I honor God.")
Robot: ТЫ УВАЖАЕШЬ БОГА ("You honor God.")

Robot: ЧТО БОГА ТАКОЕ ("What is God?")

The diagnostic display reveals that the software has almost recognized the word for God.

559. Б 0 * 1 1 0
560. О 0 * 0 1 0
561. Г 0 * 0 1 501
562. А 0 * 0 0 902

Aha! Suddenly it becomes clear that two things are happening. The Psi concept #501 is indeed being recognized at first, but perhaps the provisional-recognition "prc" variable is not being set, and so AudInput calls NewConcept as if the AI were learning a new word instead of recognizing an old word.

Sat.4.FEB.2012 -- Learning Russian Like a Human Child

Now in a very rough way we have trapped for "zad1" in the AudRecog module so as to recognize a noun (БОГА) with one character of inflection added onto it. Because the noun was indeed recognized, the InStantiate "seqneed" mechanism tagged the noun in the "ruLexicon" with a "dba" of "4" to indicate the direct-object accusative case. In other words, the Russian AI learned a new noun-form as a human child would learn it, that is, from the speech patterns of another speaker of Russian.
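
In outline, the trap recognizes a stored stem plus one trailing character of inflection; the lexicon layout is invented, and since the real AudRecog works character-by-character through auditory memory, this string-based JavaScript version is only an analogy.

    // If the input matches a stored stem with exactly one extra
    // character, report a provisional recognition of the stem concept.
    function recognizeWithInflection(input, lexicon) {
      for (const entry of lexicon) {
        if (input === entry.word) return { psi: entry.psi, inflected: false };
        if (input.length === entry.word.length + 1 &&
            input.startsWith(entry.word)) {
          return { psi: entry.psi, inflected: true };
        }
      }
      return null; // unknown: NewConcept would learn a new word
    }

    const lexicon = [{ word: "БОГ", psi: 501 }];
    console.log(recognizeWithInflection("БОГА", lexicon)); // { psi: 501, inflected: true }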


