Recent blog entries for mentifex

Sat.27.AUG.2016 -- Creating the MindGrid trough of inhibition

In agi00031.F we are trying to figure out why we have lost the functionality of ending human input with a 13=CR and still getting recognition of the final word of the input. We compare the current AudMem code with the agi00026.F version, and there does not seem to be any difference. Therefore the problem most probably lies in the major revisions made recently to the AudInput module.

From the diagnostic report messages that appear when we run agi00031.F, it looks as though the 13=CR carriage return is not getting through from the AudInput module to the AudMem module. When we briefly insert a revealing diagnostic at the start of the agi00026.F AudMem module, we see from "g AudMem: pho= 71" and "o AudMem: pho= 79" and "d AudMem: pho= 68" and "AudMem: pho= 13" that the carriage-return is indeed getting through there. Therefore in AudInput we need to find a way of sending the final 13=CR into AudMem. Upshot: it turns out that in AudInput we only had to restore "pho @ 31 > pho @ 13 = OR IF \ 2016aug27: CR, SPACE or alphabetic letter" as a line of code letting 13=CR be one of the conditions for calling the AudMem module.
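For readers more at home in the Perl Mind than in Forth, a minimal sketch of the same gating idea might look as follows; the variable $pho and the stub AudMem() are only illustrative here, not quoted from ghost175.pl.

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Stub standing in for the real AudMem module, just to make the sketch runnable.
  sub AudMem { my ($pho) = @_; print "AudMem: pho= $pho\n"; }

  # Illustrative equivalent of the restored Forth condition: call AudMem
  # whenever the incoming character is SPACE, a letter, or a 13=CR.
  for my $pho (71, 79, 68, 13) {       # "G", "O", "D", then the carriage-return
      if ($pho > 31 || $pho == 13) {   # SPACE (32) and letters are above 31; 13 is CR
          AudMem($pho);                # the final 13=CR now gets through to memory
      }
  }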

Next in the InStantiate module we need to remove a test that only lets words with a positive "rv" recall-vector get instantiated, because we must set "rv" to zero for personal pronouns being re-interpreted as "you" or "I" during communication with a human user. Apparently the Perlmind just ignores the engrams with a zero "rv" and finds the correct forms with a search based on parameters.
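A hedged sketch of the change, with an invented store_engram() helper standing in for the actual instantiation code, would be roughly:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Hypothetical stand-in for instantiation; the real InStantiate() differs.
  sub store_engram { my ($psi, $rv) = @_; print "instantiating psi=$psi rv=$rv\n"; }

  my ($psi, $rv) = (707, 0);   # e.g. 707=YOU re-interpreted, so the recall-vector is zero
  # Old, removed test: only engrams with a positive recall-vector were instantiated.
  # store_engram($psi, $rv) if $rv > 0;
  # New behaviour: instantiate regardless; zero-rv engrams are simply passed over
  # by later searches that rely on other parameters.
  store_engram($psi, $rv);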

Now we would like to see how close we are to fulfilling all the conditions for a proper "trough" of inhibition in the AI MindGrid. When we run the ghost175.pl Perl AI and we enter "You know God," we see negative activations in the present-most trough for both the input and the concepts of "I HELP KIDS" as the output. In the Forth AGI, we wonder why we do not see any negative activations in the present-most trough. Oh, we were not yet bothering to store the "act" activation-level in the Forth InStantiate module. We insert the missing code, and we begin to see the trough of inhibition in both the recent-most input and the present-most output.

Visualizing the MindGrid as Theater of Neuronal Activations

Recently we have developed the ability to visualize the MindGrid as Theater of Neuronal Activations. At the most recent, advancing front of the MindGrid, we see an inhibited trough of negative activations. We see an input sentence from a human user activating concept-fibers stretching back to the earliest edge of the MindGrid. We see an old idea becoming fresh output and then being inhibited into negative activation at its origin. We see outputs of the AGI passing through ReEntry() to re-enter the Mind as inhibited engrams while re-activating old engrams. We see the front-most trough of inhibition preventing the most recent ideas from preoccupying and monopolizing the artificial consciousness.

In ghost174.pl, we have now commented out some code in the InStantiate() mind-module that was letting only nouns or pronouns of human input be re-activated along the length of the MindGrid. The plan now is to let all parts of an incoming sentence re-activate the engrams of its component concepts.

Now, how do we make sure that the front-most engrams of the sentence of human input will be inhibited with negative activation in the trough of recent mental activity on the MindGrid? It appears that InStantiate() makes a sweep of old engrams to set a positive activation, and then at the $tult penultimate-time it sets an activation for the current, front-most input. In order to keep a trough of recent inhibition, let us try setting a negative activation at the $tult time-point.
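Sketched in Perl with assumed array names (the actual ghost code keeps activation inside its conceptual flag-panel, not in a bare @act array), the intended behaviour is roughly:

  #!/usr/bin/perl
  use strict;
  use warnings;

  my @act;                              # assumed activation-by-time array, for illustration
  my @old_engram_times = (477, 518);    # old engrams of a concept such as 707=YOU
  my $tult = 2426;                      # penultimate time-point of the fresh input

  $act[$_] = 30 for @old_engram_times;  # sweep of positive activation over old engrams
  $act[$tult] = -46;                    # trough: inhibit the front-most, freshest engram

  printf "t=%d act=%d\n", $_, $act[$_] for (@old_engram_times, $tult);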

After input of "I see kids" and a response by the AI of "KIDS MAKE ROBOTS", in minddata.txt we see the sweep of positive activation of old engrams.

At t=477, "YOU" has an activation of thirty (30).

At t=518, "YOU" has an activation of thirty (30).

At t=317, 820=SEE has an activation of thirty (30).

At t=575, 528=KIDS has an activation of 62, apparently because there was also a re-entry of "KIDS".

As a result of the $tult trough-inhibition, at t=2426, 707=YOU has a negative "-46" activation.

At t=2430, 820=SEE has a negative "-46" activation.

At t=2435, 528=KIDS has a negative -14 activation, apparently because the AI response of "KIDS MAKE ROBOTS" made a backwards sweep to impose a positive thirty-two (32) points of activation upon the pre-existing negative "-46" points of activation, resulting in -46 + 32 = -14 points of activation -- still part of the negative trough.

Now the AGI is making its series of innate self-referential statements ("I AM A PERSON"; "I AM A ROBOT"; "I AM ANDRU"; "I HELP KIDS"), but why is it not using SpreadAct() to jump from the reentrant concept of "KIDS" to the innate idea of "KIDS MAKE ROBOTS"? Let us see if SpreadAct() is being called, and from where. We do not see SpreadAct() being called in the diagnostic messages on-screen while we run the AGI. Let us check the Perlmind source code. We see that the OldConcept() module has been calling SpreadAct() for recognized nouns since ghost162.pl, but now we delete that snippet of code because we see in our MindGrid theater that we do not want OldConcept() to make any calls to SpreadAct(). The AGI still runs.

We see that SpreadAct() is potentially being called from the ReEntry() mind-module, but the trigger is not working properly, so we change the trigger. Then we get SpreadAct() re-activating nouns, and we begin to see a periodic association from the innate self-referential statements to "KIDS MAKE ROBOTS" and from there to "ROBOTS NEED ME". Apparently the inhibitions have to be cancelled out before the old memories can re-surface in the internal chains of thought of the AGI.
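The rough idea of the re-activation chain, in a deliberately simplified sketch (the association table and the sixteen-point increment are assumptions for illustration, not values taken from ghost175.pl):

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Spreading activation from a re-entrant concept to the engrams of associated
  # concepts, so that an old idea like "KIDS MAKE ROBOTS" can resurface once the
  # trough-inhibition has worn off.
  my %assoc = ( 528 => [ 1200, 1210 ] );   # 528=KIDS linked to engrams of an old idea
  my @act;

  sub SpreadAct {
      my ($seedpsi) = @_;
      for my $t ( @{ $assoc{$seedpsi} || [] } ) {
          $act[$t] //= 0;                  # engrams default to zero activation
          $act[$t] += 16;                  # nudge the associated engrams upward
          print "t=$t act=$act[$t]\n";
      }
  }

  SpreadAct(528);   # as if called from ReEntry() after "KIDS" re-enters the Mind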

It was fun but nevertheless sincere to post AI Has Been Solved on April Fool's Day ten years ago. Mentifex Strong AI always was and always will be an extremely serious AI Lab Project as described in December of 1998 by the Association for Computing Machinery. Mentifex AI is so extremely serious that it has meanwhile been ported into Russian and into German. The resulting Amazon Kindle e-book, Artificial Intelligence in German, has been reviewed with the highest-possible five-star rating. Another e-book, InFerence, describes how the Mentifex AI Minds can think by automated reasoning with logical inference. The MindForth AI prior art program has been cited in a Google patent. Now finally at http://ai.neocities.org/AiSteps.html a third-generation (3G) Mentifex AI Mind is being created in Perl, and Netizens from all over the world are looking into the use of Unicode and Perl to create artificial intelligence in any programming language and in any natural human language. Ladies and gentlemen, start your AI engines.


Artificial Intelligence in German (Amazon Kindle e-book)

If your humanoid robot needs an AI Mind to think in English or German, a new Amazon Kindle e-book goes into great detail about robotic thought processes.



This e-book in English about AI in German (and English and Russian) contains the entire AI source code in Forth, with the result that most of the editorial portion of the e-book (18 of 20 chapters) is readable without charge in the free preview.



InFerence
for Robot Artificial Intelligence (Mind-Module)

is now an Amazon Kindle e-book with a "Click to LOOK INSIDE!" free preview, so that programmers and AI enthusiasts who may not have a credit card can get the gist of the information free of charge from the product description and the first few chapters of the preview. InFerence is available across the World Wide Web in Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom and the USA. So far the robot AI e-book has been reviewed with four stars out of five. The robot AI software is free to download in English, German and Russian.



64-bit Supercomputer Forth Chips for Strong AI

Imagine a four-core, 64-bit Forth AI CPU designed to run a not-quite-maspar but still somewhat parallel artificial intelligence in English http://www.scn.org/~mentifex/mindforth.txt or in German http://www.scn.org/~mentifex/DeKi.txt.

Such a specialized, Strong AI Forth CPU could devote one core to visual processing and memory; a second core to auditory input and memory; a third core to robotic motor memory and output; and a fourth core to automated reasoning with http://code.google.com/p/mindforth/wiki/InFerence in English, German or Russian.

The 64-bit Forth CPU could be architecturally simple by dint of leaving out all the customary circuitry used for floating-point arithmetic, and Forth would serve as its own AI operating system.

JavaScript Artificial Intelligence Programming Journal

Wed.3.APR.2013 -- "nounlock" May Not Need Parameters

In the English JSAI (JavaScript artificial intelligence), the "nounlock" variable holds onto the time-point of the direct object or predicate nominative for a specific verb. Since the auditory engram being fetched is already in the proper case, there may not be any need to specify any parameters during the search.
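In rough Perl terms (the actual module is JavaScript, and the array name here is assumed), the point is that the time-point by itself suffices:

  #!/usr/bin/perl
  use strict;
  use warnings;

  my @ear;                       # assumed auditory memory, indexed by time-point
  $ear[1492] = "ROBOTS";         # illustrative engram of a direct object
  my $nounlock = 1492;           # time-point latched onto for the verb in question
  print $ear[$nounlock], "\n";   # fetch directly; no case parameters are needed,
                                 # because the engram at that time-point is already
                                 # in the proper case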

Fri.5.APR.2013 -- Orchestrating Flags in NounPhrase

As we run the English JSAI at length without human input and with the inclusion of diagnostic "alert" messages, we discover that the JSAI is sending a positive "dirobj" flag into NounPhrase without checking first for a positive "predflag".
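One way to express the missing check, sketched in Perl rather than in the JavaScript of the JSAI, with both flag names taken from the paragraph above:

  #!/usr/bin/perl
  use strict;
  use warnings;

  sub NounPhrase { print "NounPhrase called with the dirobj flag set\n"; }   # stub

  my ($dirobj, $predflag) = (1, 1);   # both flags positive: the problematic case
  $dirobj = 0 if $predflag > 0;       # check predflag before trusting dirobj
  NounPhrase() if $dirobj > 0;        # now NounPhrase is no longer misled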

Sat.6.APR.2013 -- Abandoning Obsolete Number Code

Yesterday we commented out NounPhrase code which was supposed to "make sure of agreement; 18may2011" but which was doing more harm than good. The code was causing the AI to send the wrong form of the self-concept "701=I" into the SpeechAct module. Now we can comment out our diagnostic "alert" messages and see if the free AI source code is stable enough for an upload to the Web. Yes, it is.

German Artificial Intelligence Programming Journal

Thurs.14.MAR.2013 -- Seeking Confirmation of Inference

In the German Wotan artificial intelligence with machine reasoning by inference, the AskUser module converts an otherwise silent inference into a yes-or-no question seeking confirmation of the inference with a yes-answer or refutation of the inference with a no-answer. Prior to confirmation or refutation, the conceptual engrams of the question are a mere proposition for consideration by the human user. When the user enters the answer, the KbRetro module must either establish associative tags from subject to verb to direct object in the case of a yes-answer, or disrupt the same tags with the insertion of a negational concept of "NICHT" for the idea known as "NOT" in English.
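In outline, and with invented helper names standing in for the actual knowledge-base operations (the Wotan code itself is in Forth), KbRetro behaves something like this:

  #!/usr/bin/perl
  use strict;
  use warnings;

  sub keep_proposition { print "subject-verb-object tags left in place\n"; }   # stub
  sub insert_negation  { print "negational NICHT attached to the verb\n"; }    # stub

  sub KbRetro {
      my ($answer) = @_;
      if    ($answer eq 'ja')   { keep_proposition(); }   # yes-answer confirms the inference
      elsif ($answer eq 'nein') { insert_negation();  }   # no-answer refutes the inference
  }

  KbRetro('nein');   # e.g. the user denies that Eva has a child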

Fri.15.MAR.2013 -- Setting Parameters Properly

Although the AskUser module is asking the proper question, "HAT EVA EIN KIND" in German for "Does Eva have a child?", the concepts of the question are not being stored properly in the Psi conceptual array.

Sat.16.MAR.2013 -- Machine Learning by Inference

Now we have coordinated the operation of InFerence, AskUser and KbRetro. When we input, "eva ist eine frau" for "Eva is a woman," the German AI makes a silent inference that Eva may perhaps have a child. AskUser outputs the question, "HAT EVA EIN KIND" for "Does Eva have a child?" When we answer "nein" in German for English "no", the KbRetro module adjusts the knowledge base (KB) retroactively by negating the verb "HAT" and the German AI says, "EVA HAT NICHT EIN KIND", or "Eva does not have a child" in English.

German Artificial Intelligence Programming Journal

Sat.9.MAR.2013 -- Making Inferences in German

When the German Wotan AI uses the InFerence module to think rationally, the AI Mind creates a silent, conceptual inference and then calls the AskUser module to seek confirmation or refutation of the inference. While generating its output, the AskUser module calls the DeArticle module to insert a definite or indefinite article into the question being asked. The AI has been using the wrong article with "HAT EVA DAS KIND?" when it should be asking, "HAT EVA EIN KIND?" When we tweak the software to switch from the definite article to the indefinite article, the AI gets the gender wrong with "HAT EVA EINE KIND?"

Tues.12.MAR.2013 -- A Radical Departure

In the AskUser module, to put a German article before the direct object of the query, we may have to move the DeArticle call into the backwards search for the query-object (quobj), so that the gender of the query-object can be found and sent as a parameter into the DeArticle module.

It may seem like a radical departure to call DeArticle from inside the search-loop for a noun, but only one engram of the German noun will be retrieved, and so there should be no problem with inserting a German article at the same time. The necessary parameters are right there at the time-point from which the noun is being retrieved.
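Sketched in Perl with assumed parallel arrays for the concept and gender data (Wotan itself is written in Forth, so this only illustrates the control flow; the concept number and the gender encoding are toy values):

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Toy stub; a real DeArticle module chooses among DER/DIE/DAS/EIN/EINE and so on.
  # Toy gender encoding for this sketch: 1 = masculine, 2 = feminine, 3 = neuter.
  sub DeArticle { my ($mfn) = @_; print $mfn == 2 ? "EINE\n" : "EIN\n"; }

  my (@psi, @mfn);
  ($psi[1234], $mfn[1234]) = (537, 3);   # illustrative engram of the query-object, neuter

  my $quobj = 537;                       # concept number of the query-object (assumed)
  for (my $t = 1300; $t > 0; $t--) {     # backwards search through recent time-points
      next unless defined $psi[$t] && $psi[$t] == $quobj;
      DeArticle($mfn[$t]);               # the gender is right there at the same time-point
      last;                              # only one engram of the noun is retrieved
  }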

Wed.13.MAR.2013 -- Preventing False Parameters

When the OldConcept module recognizes a known German noun, normally the "mfn" gender of that noun is detected and stored once again as a fresh conceptual engram for that noun. However, today we have learned that in OldConcept we must store a zero value for the recognition of forms of "EIN" as the German indefinite article, because the word "EIN" has no intrinsic gender and only acquires the gender of its associated noun. When we insert the corrective code into the OldConcept module, finally we witness the German Wotan AI engaging in rational thought by means of inference when we input "eva ist eine frau", or "Eva is a woman." The German AI makes a silent inference about Eva and calls the AskUser module to ask us users, "HAT EVA EIN KIND", which means in English, "Does Eva have a child?" Next we must work on KbRetro to positively confirm or negatively adjust the knowledge base in accordance with the answer to the question.
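A small hedged sketch of the corrective idea, matching on the word form (the real OldConcept code works with the recognized concept rather than with a regular expression):

  #!/usr/bin/perl
  use strict;
  use warnings;

  my ($word, $mfn) = ('eine', 2);   # OldConcept has tentatively tagged the word as feminine
  # Forms of "EIN" carry no intrinsic gender, so store a zero instead of a false "mfn".
  $mfn = 0 if lc($word) =~ /^ein(e[mnrs]?)?$/;
  print "mfn=$mfn\n";               # prints mfn=0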

German Artificial Intelligence Programming Journal

Wed.6.MAR.2013 -- Problems with the WhatBe Module

As we implement InFerence in the Wotan German Supercomputer AI, the program tends to call the WhatBe module to ask a question about a previously unknown word. When we input to the AI, "eva ist eine frau", first Wotan makes an inference about Eva and asks if Eva has a child. Then the AI mistakenly says, "WAS IRRTUM EVA" when the correct output should be "WAS IST EVA". This problem affords us an opportunity to improve the German performance of the WhatBe module which came into the German AI from the English MindForth AI.

First we need to determine which location in the AI source code is calling the WhatBe mind-module, and so we insert some diagnostics. Knowing where the call comes from lets us work on the proper preparation of parameters outside of WhatBe for use inside WhatBe.

Thurs.7.MAR.2013 -- Dealing with Number in German

We are learning that we must handle grammatical number much differently in the German AI than in the English AI. English generally uses the ending "-s" to indicate plural number, but in German there is no one such simple clue. In German we have a plethora of clues about number, and we can use the OutBuffer to work with some of them, such as "-heit" indicating singular and "-heiten" indicating plural. In German we can also establish priority among rules, such as letting an "-e" ending in the OutBuffer suggest a plural noun, while letting the discovery of a singular verb overrule the suggestion that a noun is in the plural. The main point here is that in German we must get away from the simplistic English rules about number.
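As a rough sketch of such prioritized rules (the endings and the overruling singular verb come from the paragraph above; the helper name and everything else are illustrative):

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Guess grammatical number for a German noun: 1 = singular, 2 = plural, 0 = unknown.
  sub guess_number {
      my ($noun, $verb_is_singular) = @_;
      my $num = 0;
      $num = 1 if $noun =~ /heit$/i;             # "-heit"   suggests singular
      $num = 2 if $noun =~ /heiten$/i;           # "-heiten" suggests plural
      $num = 2 if $num == 0 && $noun =~ /e$/i;   # a bare "-e" ending merely hints at plural
      $num = 1 if $verb_is_singular;             # a singular verb overrules that hint
      return $num;
  }

  print guess_number('Einheiten', 0), "\n";      # 2: plural by the "-heiten" clue
  print guess_number('Frau',      1), "\n";      # 1: a singular verb settles the matter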

Fri.8.MAR.2013 -- Removing Obsolete Influences

In NewConcept let us try changing the default expectation of number for a new noun from plural to singular. At first we notice no problem with a default singular. Then we notice that the InFerence module is using a default plural ("2") for the subject-noun of the silent inference. We tentatively change the default to singular ("1") until we can devise a more robust determinant of number in InFerence.

We are having a problem with the "ocn" variable for "old concept number". Just as with the obsolete "recnum", there is no reason any more to use the "ocn" variable, so we comment out some code.

