Recent blog entries for tampe

22 May 2015 (updated 22 May 2015 at 22:40 UTC) »

again forward chaining

Hi people!

I was playing with guile-log and the prolog therein to introduce forward chaining, building up databases and lookup tables. So let's go on to a nice example in graph theory.

Consider the problem of a huge graph, where the graph consists of clusters and doesn't have
many arrows going between the clusters. Also, the number of clusters is not that large and the individual clusters are not that large. The task is to set up an effective system that calculates a mapping from one node to another globally if there is a chain linking them. So what you can do is calculate a lookup table for each individual cluster and also a relational mapping of the clusters themselves. We also need to map the individual interface nodes.

The interface of library(forward_chaining) is as follows. There is a directive set_trigger/1 that defines the name of the trigger function that will be calculated. Then this atom will be used in subsequent rules defining a forward chaining, indicated with =f> as an operator that is similar to :- , --> etc. in prolog. Also, the mappings will be effectively stored in lookup tables in dynamic predicates, so one needs to declare those as well. The prelude is therefore,


:- use_module(library(forward_chaining)).

:- set_trigger(t).

:- dynamic(arrow/2).
:- dynamic(parent/2).
:- dynamic(parrow/2).
:- dynamic(interface/4).

Now for the rules,


arrow(X,Y),parent(X,XX),parent(Y,YY) =f>
    {XX==YY} -> parrow(X,Y) ;
    (parrow(XX,YY),interface(X,Y,XX,YY)).

This rule will maintain a database arrow/2 of arrows introduced, parent/2 a database
of cluster relations, and as a consequence, if the clusters are the same, make a parrow/2 relation,
or else a parrow/2 and interface/4 relation. The parrow is governed by the transitive law


parrow(X,Y),parrow(Y,Z) =f> parrow(X,Z).

parrow(X,Y) will tell if Y can be reached from X inside
the same cluster, and
parrow(XX,YY) will tell if the YY cluster can possibly be reached from the XX cluster, but not necessarily. (This is
used to cut off branches later.)

That's the forward chaining part. We make some custom functions to add data to the database, e.g.


set_arrow(X,Y) :- fire(t,arrow(X,Y)).
set_parent(X,Y) :- fire(t,parent(X,Y)).

You issue these functions for each arrow relation and cluster relation in the system, and the databases will be set up just fine through the triggering system inherent in forward chaining.
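For concreteness, here is what populating a tiny system could look like. The node and cluster names (a1, a2, b1, b2, c1, c2) are made up for illustration and are not part of the library:

populate :-
    % two clusters, c1 = {a1,a2} and c2 = {b1,b2}
    set_parent(a1,c1), set_parent(a2,c1),
    set_parent(b1,c2), set_parent(b2,c2),
    set_arrow(a1,a2),   % internal to c1, triggers parrow(a1,a2)
    set_arrow(b1,b2),   % internal to c2, triggers parrow(b1,b2)
    set_arrow(a2,b1).   % crosses the clusters, triggers parrow(c1,c2)
                        % and interface(a2,b1,c1,c2)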

The meat


'x->y?'(X,Y) :-
    parent(X,XX), parent(Y,YY),
    ( XX == YY -> parrow(X,Y) ; i(X,Y,XX,YY) ).

this is plain backward chaining, not defining any databases. We just dispatch depending on whether the clusters are the same or not. If they are the same, it's a microsecond away in the lookup table of
parrow/2; else we dispatch to i. i is interesting, here it is:


i(X,Y,XX,YY) :-
    parrow(XX,YY),
    (
      (interface(Z,W,XX,YY),parrow(X,Z),parrow(W,Y)) ;
      (interface(Z,W,XX,ZZ),parrow(X,Z),i(W,Y,ZZ,YY))
    ).

Well, XX must relate to YY, aka parrow/2. But that is just a rough estimate, a hash value; if they are the same we must do more work. We first try to go to an interface node directly from XX to YY via an interface Z,W. For all of those we try to match a parrow/2 lookup, as it is defined within the same cluster, but that may fail, and then we try to jump via an intermediate cluster.
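With the hypothetical toy data from above, a cross-cluster query would resolve like this (a sketch of the expected trace, not an actual transcript):

?- 'x->y?'(a1,b2).
% parent(a1,c1), parent(b2,c2), c1 \== c2, so i(a1,b2,c1,c2) runs:
%   parrow(c1,c2),            the clusters relate
%   interface(a2,b1,c1,c2),   the direct interface arrow
%   parrow(a1,a2),            in-cluster lookup in c1
%   parrow(b1,b2).            in-cluster lookup in c2
true.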

A lookup table for the whole global system is expensive memory-wise and you easily blow
guile-log's limit of 10000 elements in the database. But the lookup tables for these systems are heavily optimized for fast lookup. Now, just doing the lookup tables for the individual clusters
will make it scalable for larger systems than if these tricks were not used. I find this system is
a nice middle ground between creating gigantic lookup tables and doing everything in searches that
can take quite some time.

have fun!!!

25 Apr 2015 (updated 25 Apr 2015 at 15:14 UTC) »

The escape of the batch curse

Consider the following problem. Assume that we can generate two random sequences l1,l2 of numbers between 0 and 9. Take a transform that maps each number to the distance until it appears again, modulo 10; call this map M. Let Max be the transform of a sequence taking the max of the current value and the next. Let Plus be the summation of two such sequences modulo 10. We also assume that we know that the second sequence, l2, has the property that, elementwise,


Max(M(l1)) .leq. Max(M(l2)),

how do we go about generating


M(Plus(Max(M(l1)),Max(M(l2)))).
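To make the M transform concrete, here is a hypothetical helper (m_value/3 is mine, not from any library) that computes the M value of one element given the rest of the sequence:

% distance until X reappears in the remaining sequence, modulo 10
m_value(X, Rest, M) :-
    nth1(N, Rest, X), !,   % N = position of the next occurrence of X
    M is N mod 10.

% e.g. in [3,1,4,3|...] the first 3 reappears 3 steps later:
% ?- m_value(3, [1,4,3], M).
% M = 3.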

The idea of the solution I would like to play with is to generate a special variable: when you create it, the value is not known, but you can place it in the right order, and then when all its dependencies are available the result will be executed. I've played with these ideas a long time ago here on this blog, but now there is the addition of backtracking that comes into play, and we use guile-log and prolog. So what is the main trick that enables this?

Define two predicates, delay and force, that are used as follows


plusz(Z,X,Y) :-
    delay(plusz(Z,X,Y),X,Y) ;
    (ZZ is X + Y, force(Z,ZZ)).

we want to take the addition of X and Y. If X and Y both have been forced, delay will fail; else it will delay the evaluation of plusz(Z,X,Y) and execute that function at the time when both have been forced. To put the value in Z we need to execute special code to force the value, if Z as well has been blessed as a delayed value. That's it, it's defined in about 50 rows of guile-log code, nothing huge.
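Other delayed operations follow the same pattern. For example maxz, which is used by moving_op below, could plausibly be written like this (my sketch; the post does not show its definition):

maxz(Z,X,Y) :-
    delay(maxz(Z,X,Y),X,Y) ;
    (ZZ is max(X,Y), force(Z,ZZ)).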

The setup to generate a sequence is to maintain state and define transforms that initiate the state and update the state. Given such transforms one has enough to generate the sequence, so one needs to make sense of the following idioms


next(S,SS) :- ..
start(S) :- ..

Let's see how it can look for our example in prolog,


next_all2([Z1,Z2,Id,S1,S2,Z],[ZZ1,ZZ2,IId,SS1,SS2,ZZ]) :-
    next_M(Z1,[R1,P1,C1|_],ZZ1),
    next_M(Z2,[R2,P2,C2|_],ZZ2),
    moving_op(2,maxz,0,U1,C1,S1,SS1),
    moving_op(2,maxz,0,U2,C2,S2,SS2),
    fail_if(P2,(U1 .leq. U2)),
    plus10z(C,U1,U2),
    next_M(Z,[_,_,CZ|_],ZZ),
    plusz(IId,Id,C),
    writez(_,IId,C,R1,R2).



next_M(Z,X,ZZ)

next_M(Z,X,ZZ) will be the sequence M(l), e.g. it's a construct that generates state information Z->ZZ with the current value X=[l_i,M(l)_i,Redo ...], with l_i the i'th generated random value, M(l)_i the number of steps it takes before l_i appears again in the sequence, modulo 10, and Redo the backtracking object so that everything restarts from the generation of random value l_i.


moving_op(N,Op,InitSeed,MaxRes,ValIn,S,SS)

N is the length of the window, Op is the reducing operator op(Z,X,Y), InitSeed is the initial value of the reduction, MaxRes is the current result of e.g. the max operation on the window (perhaps delayed), ValIn is the incoming value of the sequence, S is the state in, and SS is the state out.
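As a rough standalone sketch of what moving_op might do, ignoring the delay machinery and the redo plumbing (this is my assumption, not the actual implementation):

moving_op(N, Op, Seed, Res, In, S, SS) :-
    append(S, [In], S1),                 % push the new value
    length(S1, L),
    ( L > N -> S1 = [_|SS] ; SS = S1 ),  % drop the oldest if the window is full
    window_reduce(Op, Seed, SS, Res).    % reduce the window with Op

window_reduce(_, Acc, [], Acc).
window_reduce(Op, Acc, [X|Xs], Res) :-
    Goal =.. [Op, Acc1, Acc, X],         % e.g. maxz(Acc1, Acc, X)
    call(Goal),
    window_reduce(Op, Acc1, Xs, Res).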


fail_if(P,(U1 .leq. U2))

when U1 and U2 have been forced and the condition U1 .leq. U2 does not hold, this fails and backtracking restarts from the redo object P.


plus10z(C,U1,U2),


plus modulo 10 of Max(M(l1)) and Max(M(l2))
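A plausible definition in the same delay/force style (again an assumption, the post does not spell it out):

plus10z(Z,X,Y) :-
    delay(plus10z(Z,X,Y),X,Y) ;
    (ZZ is (X + Y) mod 10, force(Z,ZZ)).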


plusz(IId,Id,C),

This is a convolution of the generation of solutions C; the result IId_i will be non-delayed if and only if all C_k, k <= i, have been forced.


writez(_,IId,C,R1,R2).

Write out the result C and the generated random values R1 and R2 for l1 and l2.

As you see, this approach makes sure the combination of values is in the right synchronization, and the solution allows one to decompose the problem into reusable, more abstract components that one can quite easily sew together. That's the power of this idea: you want to change the algorithm, easy to do, and the number of bugs will be smaller due to the composability of the approach, neat! Also this approach will be memory safe due to the neat GC that guile-log has for logical variables, so everything will work on sequences as long as you are prepared to wait for.

Cheers!

26 Jan 2015 (updated 26 Jan 2015 at 18:24 UTC) »

Swipify of guile-log and theory

Most of my continued work on guile-log has paid off. I can now compile and run quite many modules from the open source swi prolog engine. At the moment I'm working on getting the clpfd module, the integer solver over finite domains, working. It is not easy: the compilation takes 5 minutes and you need to recompile to find bugs. But a solution will come in the end. The most difficult part was to get prolog macros working, e.g. expansion of prolog code using prolog hooks. The system was not designed for these, it was designed to use the scheme macro expansion, so the solution is a bit clumsy.

I think I managed to fix the negation issue with tabling, and a ton of technology semantics is copied from the swi prolog engine. This means that in the next release you will get quite good possibilities to try out guile-log's features on your own prolog code. Oh, the compilation is slow, but the actual execution of code is just 10x slower than swi prolog. This can be noticeable; on the other hand, the features you have can enable you to escape from exponential complexity issues, so it might be a win still, especially if you can make use of guile-log's internal data structures.

I am following Black Light Power's work making use of Mills' theory-of-everything concept, called GUTCP. He has a freely downloadable book on his web page and I recommend anybody knowledgeable in math to sift through it. He does explain how everything works and deduces hundreds of constants for atomic and particle physics just out of the classical Maxwell equations and the condition that there is a source field that hinders radiation from removing energy from particles. To see all this for yourself, my general reading suggestion to verify the power of his theory is to start with the conclusion of the derivation of the g-factor (see wikipedia) and work yourself towards the basic assumptions. The conclusion states that an amazing number of digits are correctly calculated just from assuming that the hydrogen atom consists of a photon in a resonant cavity together with source terms at a spherical shell of a specific radius, so that all balances and nothing is radiated to the outside. You may have physical objections originating from what you know about QM and the fundamentals of physics, but the truth is that there are no mathematical choice points or fudge factors, and the math for this conclusion is quite ok to follow. Quite neat. What amazes me is how physicists seem unable to understand that for all this to be untrue one needs to point out where the fudge factors or choice points are located. There is actually no need for physics, just mathematics, to verify this. I could not find any traces of such tweaks, and I expect that any objections should be submitted with a page number and an equation number, showing clearly that there is a choice point or fudge factor. Experts are so stuck with their theory that they backpedal and claim some abstract concept is violated originating from another theory, and totally miss the obvious. You just can't fake that precision; there must be something buried in all this.

Oh, so what is the origin of charge and mass? Well, I've asked around and the best explanation is that there is something like a continental shift in space at the source terms that hinders information from flowing through the surface. If you think of a higgs-like field, not interacting with itself, of equal intensity in all directions, one could think that this higgs field is interacting at this surface in such a way that you get source terms in Maxwell's equations. It feels ok; also it is not unthinkable that this interaction maintains the continental drift, the stable setup and the nonradiation. The seemingly fine balance of Mills' theory of the atom and particles is probably much more robust than the impression you get by reading it. So there are still bits of theory to be developed to actually show this, and this is probably what will happen if Mills manages to create energy by producing dark matter, e.g. hydrinos. Else we will be stuck with false physics for decades and probably a great hindrance of the only progress that can stop (at least temporarily) humanity from killing itself off the planet (if the environmentalists are right, and there is a big enough chance they are to take this seriously). So this is why I'm writing this: a true concern, and also to share the joy of understanding the basics of the universe - it isn't that difficult after all.

28 Nov 2014 (updated 28 Nov 2014 at 23:10 UTC) »


If I'm going to do this an infinite number of times, I can just as well say I did succeed in doing so and get a good night's sleep

I just released a new version of guile-log, e.g. logic programming in guile scheme. This release has a few major improvements. The most noteworthy of them are

Support for tabling, e.g. prolog versions of memoisation. There are a few important facts to note. First of all, the memoisation means that many infinite recursions will succeed and you can get a meaningful answer out of


f(X) :- f(X).

The meaning with memoisation is of course that if f is continued ad infinitum and never binds X, then f will succeed and X is not bound. This is a nice feature together with the good support for recursive datastructures that is now included in guile-log. The other peculiarity is that for a given input the code can yield many outputs via backtracking. So it is not an easy-peasy thing to churn out. I am not by any means first in producing such a tabling system. The most interesting thing though is that the machinery to implement this was (almost) already there, and the solution is simply a meta programming on those tools.

The system works by having, for each tabled function, a functional hash from any input to a list of outputs that may not be a unique list. As new solutions are produced, they are consed onto the list; in evaluating the function it will look up the list of solutions and then produce them as answers backtrackingly. When all solutions have been produced, it will then look up the functional datastructure again and see if there are any new solutions; if not it will store a continuation and then fail. There will be a base, e.g. the first time the function is called, that, if all continuation points have failed, restarts all continuations; each of them will reevaluate whether there are any new solutions to produce, and if they all fail the next round, a fixpoint is found and no new solutions are produced. Neat. Be careful with negation (do you know why). Let's show some prolog ...


memo.scm:
------------------------------------
(compile-prolog-string
"
:- functorize(tabling).
ff(X) :- (X=[Y,A,B]),ff(A),ff(B),(Y=1;Y=2).
")
------------------------------------
scheme@(guile-user)> (use-modules (logic guile-log iso-prolog))
scheme@(guile-user)> (load "memo.scm")
scheme@(guile-user)> ,L prolog
Happy hacking with Prolog! To switch back, type `,L scheme'.
prolog@(guile-user)> .rec ff(X).

X = {0}[1, ref[0], ref[0]]
more (y/n/a/s) > s
prolog@(guile-user)> .10 .c

X = {0}[2, ref[0], ref[0]]

X = {0}[1, ref[0], {1}[1, ref[1], ref[1]]]

X = {0}[2, ref[0], {1}[1, ref[1], ref[1]]]

X = {0}[1, ref[0], {1}[2, ref[1], ref[1]]]

X = {0}[2, ref[0], {1}[2, ref[1], ref[1]]]

X = [1, {0}[1, ref[0], ref[0]], {1}[1, ref[1], ref[1]]]

X = [2, {0}[1, ref[0], ref[0]], {1}[1, ref[1], ref[1]]]

X = [1, {0}[1, ref[0], ref[0]], {1}[2, ref[1], ref[1]]]

X = [2, {0}[1, ref[0], ref[0]], {1}[2, ref[1], ref[1]]]

X = [1, {0}[1, ref[0], ref[0]], {1}[1, ref[1], {2}[1, ref[2], ref[2]]]]
$1 = stalled
prolog@(guile-user)>

Note that the same solution can show up many times in this infinite list. It is possible to use tools that make sure the list is unique, but that is expensive and not shown here. Also note how one can issue an 's' and return to the guile prompt, from where state management can be done, as well as taking 10 values as shown above.

As shown above, recursion-aware unification, as well as many other recursion-aware operations, can now be enabled via a prolog goal or the .rec switch at the command line.

A modified bdw-gc has been made (see the guile-log docs) and code inside guile's C layer has enabled fully garbage collected prolog variables. Now most normal prolog code will be safe to use even in a server setup where you basically tail call forever, and temporarily bound variables will not blow the stack. This was a pretty difficult thing to get fully working. A really nice hack indeed.

swi prolog's attributed variables and coroutines have been implemented, at least partly and with some extra bells and whistles. This feature means that you can hook in code that will be executed when variables are bound to specific values, or, well, just bound; look up these features in the swi-prolog manual if you are interested, pretty cool.
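To get a feel for the coroutine style, here is freeze/2 in plain swi prolog, as described in the swi manual (a generic swi example, not a guile-log transcript):

?- freeze(X, format('X is now ~w~n', [X])), X = 42.
X is now 42
X = 42.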

operator bindings are now namespaced, meaning that by importing a module operators can get a new meaning; this can be used to take advantage of guile's number tower and not adhere strictly to iso-prolog.

Ok, there are a few more points in the release; download it and have a play. I'm basically the only user and implementor, so it is only cool alpha software. I'm now heading towards being able to compile at least parts of the swi prolog system, to get more testing and because it is a nice bite to chew on; getting good prolog compatibility regarding the module system and a few more points is the goal.


Happy hacking and have fun!

guile-log 0.4.1 released

I'm really proud of this release. It sports an implementation of a logic programming environment that previously had an interface designed by myself, plus the famous kanren interface that you grok if you read The Reasoned Schemer. In this release a fairly complete implementation of an iso-prolog has been churned out. That was a huge effort, but the ride was interesting and gave me a lot of new insights in computer programming. This also sports proper namespace handling, proper closures, proper delimited continuation goals, the kanren interleaving constructs, a framework that is enabled by functional data-structures and state handling, vhashes, vlists, and, the coolest thing of all, you can save state and restore state quite cheaply and seamlessly, and with great power if you learn the system. By seamlessly I mean that we do not have proper functional data structures everywhere, due to semantic needs especially in accumulators and delimited continuation goals, and the logical variables may also be used in a mutative fashion for two reasons: 1. to enable GC of prolog variables; 2. it is maybe 3-4 times faster compared to a vhash based version that is also possible. The vhash version is thread safe (I'm not using guile's internal vhash, but a modded version in C).
Anyhow, to seamlessly handle state in all this is really a delicate affair. Cheaply refers to the fact that I tried hard to enable state storage and state retrieval in algorithms, meaning that a save is much more intelligent than saving the whole state of the prolog engine. In all, I strongly recommend anybody interested in logic programming to study the features more deeply. I believe there are some good lessons to learn there. And finally, by power I mean that the system has gained an internal tool that makes difficult algorithms possible.

Let's play with it


scheme@(guile-user)> ,L prolog
Happy hacking with Prolog! To switch back, type `,L scheme'.
prolog@(guile-user)> .[use-modules (logic guile-log iso-prolog)]
prolog@(guile-user)> .[use-modules (logic guile-log guile-prolog interpreter)]
prolog@(guile-user)> user_set(1,1),stall,user_set(1,2),stall.
stalled
/* We are at the first stall */
prolog@(guile-user)> .h

HELP FOR PROLOG COMMANDS
---------------------------------------------------------------------
(.n <n> ) try to find n solutions
(.all | .* ) try to find all solutions
(.once | .1 ) try to find one solution
(.mute | .m ) no value output is written.
---------------------------------------------------------------------
(.save | .s <ref>) associate current state with name ref
(.load | .l <ref>) restore associated state with name ref
(.cont | .c ) continue the execution from last stall point
(.lold | .lo) restore the last state at a stall
(.clear ) clear the prolog stack and state
---------------------------------------------------------------------
(.ref <ref> ) get value of reference user variable ref
(.set <ref> <val>) set user variable ref to value val
---------------------------------------------------------------------
prolog@(guile-user)> .ref 1
$1 = 1
prolog@(guile-user)> .s 1
prolog@(guile-user)> .c
$2 = stalled
/* we are at the second stall */
prolog@(guile-user)> .ref 1
$3 = 2
prolog@(guile-user)> .s 2
prolog@(guile-user)> .c
yesmore (y/n/a) > n
$4 = ()
prolog@(guile-user)> .l 1
prolog@(guile-user)> .ref 1
$5 = 1
prolog@(guile-user)> .l 2
prolog@(guile-user)> .ref 1
$6 = 2
prolog@(guile-user)> .c
yesmore (y/n/a) > n
$7 = ()
prolog@(guile-user)>

To play with it, check out the v0.4.1 tag at guile-log and read the manual at manual

Have fun

Proluguroschlispymindy


I'm soon about to do a new release of guile-log - logic programming in guile scheme. This release sports delimited continuation goals, an iso-prolog interface (it already sports a kanren interface) and much more.

Why guile-log's prolog? Well, for one thing, if you want to program a future emacs in prolog, the base is now there. And it does sport a few interesting idioms of its own. But also it's possible to treat a prolog module as a scheme module and reuse already written prolog in e.g. a kanren program, and of course vice versa. It has been really tough to match an iso prolog at a reasonable level, and we will probably not have a stable solution until quite a few releases ahead of us. Also there is not much of a community, so if you are interested in any of the aspects above, please try to contact me at the guile-user mailing list.



Anyway this is all for your fun, so have fun!! Cheers!

We know everything, so why are you thinking, moron!

I have blogged a few times about fundamental physics insights, but there is possibly a better source. There is a free thinker regarding how the world is working; he is a heretic, he says that we all got our models backwards, that QM is a huge joke, and that the world is much simpler than proclaimed in science shows and universities throughout the world. Or is he just a person that got tricked by a mathematical play? Read on for the story.

The background is that circa 20 years ago Randell Mills, the founder of BlackLightPower, published his theory, and he has refined it ever since. I spent time reading his books and am really intrigued by how badly this material is treated, both by himself and by the scientific community as a whole. I know that many of you have read about BlackLightPower and the hydrino power they have so often claimed to produce. To be frank, I find the hydrinos fishy, and after I read his book I can actually point to weaknesses in Mills' proofs of their existence. But folks, hydrinos and antigravity are just a few sidetracks of the theory that may be false, not because the theory is bad, but because the theory is missing parts and needs to be complemented.

So what is the possible mistake in the assumption of a hydrino state? Well, what Mills does is take plain ol' electromagnetism; he sets up a photon trapped by a varying charge density on a sphere around the photon, says that the system is a stable state if it does not radiate, and deduces mathematically how this charge distribution looks. In itself, managing to do this is a great theoretical task and he should get a golden star for it; no one claims he has been mathematically wrong in his solution(s). He then calculates the properties of these atoms and finds that it produces the same energy states and levels as normal quantum electrodynamics says. But not only this, he produces extremely accurate results for many body problems for basically all kinds of atoms and many molecules. Of course Mills has great confidence in the theory after that feat. But now comes the sad part that probably shot down the whole effort and maybe has tricked Mills into hunting a ghost for 20 years. He finds states, hydrino states, that are
less energetic states below the normal ground state in e.g. hydrogen. We should be able to produce these hydrino states and harvest the energy from normal water basically forever; welcome, new free energy world. Is it so? Well, as a mathematician I am picky with 'if' and 'if and only if'.

Mills' theory is interesting because it can give a very natural explanation of how the world is working. Maxwell's equations for electromagnetism are what we call in mathematics linear; this means that we can superpose different solutions on top of each other and produce new solutions. It is also the typical situation when disturbances from a steady state are small. So one would expect the Maxwell equations to be invalid for extreme cases where the disturbances are large. So what can happen when the equations break? Well, one thing is that waves may start reflecting. It is interesting to note that in Mills' solution the information-flowing part of the solution at the surface is flowing tangential to the surface, which is natural because the solution does not radiate. If we assume that it is the information-flowing part that gets mirrored at the surface, we can imagine that the solution is stable if we deform it slightly. The actual dynamics of what happens if we disturb the system slightly is a key part of really saying that a system is non-radiating. And this analysis is missing in Mills' theory of hydrinos. It can happen that these hydrinos will start to radiate if disturbed a little and then, because of this, move even further away and then break; if this is the case then these hydrino states will not be long lived and will not represent a physical solution. So a physical solution does not radiate, but all states that do not radiate are not necessarily physically attainable states. So in
order to accept hydrinos I would need an 'if and only if' proof by Mills.

If you read about Mills' theory and try to find critiques of it, you may find:

Hydrinos cannot exist because bla bla bla. - Correct, but that does not invalidate the theory; it must be complemented.

It challenges QM, QM predicts everything quite well, it cannot work. - Well, Mills' theory is easier to work with and produces impressive results especially for atomic physics. So QM and Mills' theory are just two mathematical models of the same physics; they are dual. No one has shown how this can be so, so why aren't bright theoreticians taking on the task of explaining it?

Mills talks about antigravity as a fifth force, sure it must be a fake. - Well, he produces, is it 1500 pages of dense theory; sure, if you cherry pick, you can find strange things, but most of it seems to be correct.

Andreas Rathke says his theory is not Lorentz invariant. - Well,
Andreas is trying to show this by showing that the charge density does not follow the wave equation. Here is an issue with 'if and only if': solutions to the wave equation follow special relativity, but not all solutions compatible with special relativity follow the wave equation. This critique is really poor work and you can find quite many issues with it in Mills' rebuttal to that paper.

In all, I would ask any critic to concretely show how the hell Mills faked all the calculations that are spot on, and not just mumble some abstract criticism; that is really missing the point. Mills has by far not finished his work; that task is up to the physics community. Because, reading his book, all his amazing formulas seem to be based on just his basic assumption and then plain ol' electromagnetism. Am I right, or did I miss something? Read it for yourself, have fun!

References:

wikipedia

the art of memorization


I read a book about human memory and memory techniques. Quite interesting, and as an association to that memory I will in this post try to describe a nifty memory trick you can do in stack-based prolog-like engines.

The task is to make a kind of variable that behaves just like a variable, but at a memorization of the current state these are automatically stored. We should try to not store too much information, so we will need some mechanism that frees memory in gc and selectively stores stuff according to some intelligent scheme.

So let's start with a basic building block.


(define-syntax with-guarded-states
  (lambda (x)
    (syntax-case x ()
      ((_ guard ((s v) ...) code ...)
       #'(let ((s v) ... (fr #t) (done #f))
           (letrec ((guard (mk-guard *current-stack* fr done guard s ...)))
             (dyn
              (lambda ()
                (set! fr #t)
                (set! done #t)
                (push-setup
                 *current-stack*
                 (lambda ()
                   (set! done #f))))
              (lambda (x) (set! fr #f))
              *current-stack*)
             (let () code ...)))))))


with-guarded-states takes a name for a guard and associates it with a set of variables s ... with initial values v .... The guard is active in the code section, and it is a function that is used to set the variables s ... in the wanted manner. In the code we first initiate the variables with the initial values and initiate two flags, fr and done. We then define a guard function with the helper macro mk-guard, whose semantics will be described later. Then the function puts a dynwind dyn on the prolog stack. A dynwind has essentially two functions: one that is executed when the stack is reinstated and one where the stack is unwound. Here, at unwinding, the fr flag is set to #f, and at rewinding, e.g. recalling a state, fr is set to #t and done is set to #t, and we push a setup hook that sets done to #f. The setup hook is called after the winding has been finished. The code is then executed in a let environment, meaning that we are allowed to start the code with a set of defines. Anyway, done is used to make sure we will update just once if there is a sequence of guards evaluated, fr will mark that we are in the frame of the guarded variables, and done will mark that we are hitting the first guarded set! in a wind/unwind.

The mk-guard is essentially the following


(lambda (ss ...)
  (let ((so s) ...)
    (set! s ss) ...
    (dyn
     rewind-code
     unwind-code)))

So the generated guard will take the new data ss ..., store the old state in so ..., then set the variables with the new data in (set! s ss) ..., and push a dynwind on the stack that represents the memorization of the setting of the variables.
Essentially, when passing the dynwind in an unwind we will restore the old value, and when passing it in a rewind we will restore the new value.

The rewind code and the unwind code will be described next; we use,


(begin
  (if (and (gp-wind-ref p) (not done))
      (begin
        (set! done #t)
        (push-setup p
          (let ((ss s) ...)
            (lambda ()
              (set! done #f)
              (if fr
                  (guard ss ...)
                  (begin (set! s ss) ...)))))))
  (set! s so) ...)

So here we see the done flag in action: if done is false, e.g. we hit the variable for the first time, then it will try to execute the if; we also need gp-wind-ref to be true, e.g. we are saving the variable values. The if just marks done as #t so we will not do this again if we enter a new guarded set for the same variables. Again we push-setup, e.g. put a hook to be executed at the finishing of the wind/unwind. So we store the current variables in ss; the lambda then will undo done and set it to #f, ready for the next wind/unwind. It will then check if we are in the frame of the variables; if so we will make a new guard setting the s to the stored ss. If we have left the frame we will just set the s. The net effect is that we have kept the initial value of the guarded variables intact, but been able to correctly add, if needed, a guard that does the correct transformations of the variables if we unwind or rewind over the new guard. In case of leaving the frame of the variables we have still kept the variable values, and in case there are no references to the variables all data will be reclaimed at perhaps the next gc. After the if part is done we simply restore the old values so.


The code for the wind is very similar; again we will make sure to just execute the first encountered guard. The difference is that fr will always be true here. Also, gp-wind-ref will flag whether we shall keep the value of the special variable or restore the old value. There is a problem with the previous code though: one needs to make sure not to grow the number of guards as one reinstates the new value. This is an optimization not solved yet, but the wanted semantics should be correct. A solution would be, at re-instation, to overwrite the last guard/wind referring to the variables with the new one, or else add a new one. Another optimization is to make sure that we do not add a null guard, e.g. one that does not change anything. The deficiencies shown at winding back the stack do not look that good, but in practice the rewind is followed by an unwind, which essentially keeps the number of guards at say 1 or 2.

So this works pretty well. It is not perfected, but still a quite cool feature if one does not restore data to the same state too often without backtracking over the guard constructs.

Have fun!

One year of imagination and coding

This year I have accomplished much more than any year before. Maybe not so much is seen in the space of openly sourced code, but quite a lot has been achieved under the hood. For example I have been studying the guile sources extensively by experimenting with compiling guile scheme to native code, and making an rtl compiler. I'm pretty new here and actually not trusted to make such a pillar of a component of guile. I'm pretty fine with that. I really enjoy the programming effort and will of course use this experience in the open discussions of how to churn out these things in the end, when the trusted hackers start hacking. Another reason I do not want to promote my code is because it's learning code, e.g. I'm not that good and experienced a programmer to churn these things out to the best degree. The native code works, can be 2-3 times faster than rtl, which maybe is 50% faster than guile-2.0, and it can be made working. But it's not the best you can do!

My approach is simple. Just take a byte code and inline a C-stub that represents that code. Many operations would be just inlining, essentially a function call out to a C-stub that does the meaty work. The rest could be efficiently coded with a few cpu instructions. The fear with this approach is that we will end up with bloated code that blows the instruction cache. I'm not sure about this; I just observed that the number of bytes to encode a simple function was less than encoding it in actual byte code, so I do not buy this yet.


Nah, the main problem is that I don't take advantage of cpu registers. For example, there are JIT engines out there that do register handling, so it is important in order to get up in speed. But I want the engine to be coded in scheme - which maybe is stupid, but after coding quite a lot of assembler in scheme I'm convinced that getting that functionality into scheme is a boon.

Another big task I have taken on is to make sure guile can compile scheme down to the rtl VM that wingo has coded. It was impossible for me to do that on a clean piece of paper, so I took the old compiler for the old VM and molded it to produce RTL VM code instead. Cool, but remember we need to make use of registers! And the rtl vm representation, like the old guile VM representation, is not optimal for finding out the allocations of the registers. We need something new, and Noah has started coding on such a compilation scheme. But now I know all the details needed to produce nice rtl vm code - so I can be a good help in his quest.

Happy Hacking :-)

I order you to try the next calculation

Here is the problem: I generate sequences of elements in parallel, but they don't come up in sync; some elements are dropped and some are added on one, the other, or both.

Well, this can be solved by plain old generators. But I want the backtracking features of prolog at my fingertips in order to generate this - it's nice, and I also would like to, at will, save the state and later redo the calculation with perhaps some other settings of some global variables.

So again, the kanren way to handle this can be done, but if one wants to put everything in tail call position and not use the common trick of sending info by returning the function the normal way, I really don't know how to code this. With kanren you can use variables with values on the heap and set! them to update information; this can work, but I would also like to store and retrieve the state, and then letg variables are ideal to use. So I coded a /> idiom using this, and this is how it looks


(%||% ((next-1 ((x xx)) (f 0.1 x))
       (next-2 ((x yy)) (f 0.2 x)))

  ;; Guard phase
  (if (condition-1 xx yy) (%update% next-1))
  (if (condition-2 xx yy) (%update% next-2))

  body-of-the-rest ...)


'%||%' has the model


(define-syntax-rule
  (/> ((Next ((x xx) ...) code ...) ...)
      body ...))




this idiom will generate in parallel values of x that are copied over to xx and yy; then, depending on the values, either xx or yy is updated if they are out of sync. If they are in sync then the body will be evaluated, and finally when the body backtracks both next-1 and next-2 will be updated. Nice, it does what I want. I can also use (%update% (next-1 failure)) if failure is a particular point inside
the code in next-1. I keep it in a form that allows me to use the implementation in languages without call/cc-like constructs; it uses almost no stack space if proper tail-calls are implemented. It properly and magically manages state so that it can be retrieved, or, in cases where the state is not stored, appropriately unlinks the overhead so it can be GC:ed. If speed is needed, well, then consider using generators or the let version of the code. For the ones that think that functional code is the golden egg, remember that I do employ functional techniques; for example in the unique function that I implemented in the previous post, the list could be replaced with a functional tree in order to improve scalability. Cool, in order to be non-functional I need to be functional (partly).

