# Older blog entries for tampe (starting at number 96)

29 Dec 2009 (updated 16 Jan 2011 at 07:17 UTC) »
Christmas days, thinking differently

I spent this Christmas reading Simon Singh's Big Bang. And as Simon pretty much says, I say as well: what a wonderful world.

I'm mentally affected by my education in that I constantly ask myself whether the things presented in a popularizing book are really true. Did he mean that? Sure, people cast things in a better light when asked about them afterwards, and so on. There is a constant flow in my mental left margin. Anyhow, I'm now really impressed by the puzzle that scientists assembled to achieve such solid ground for the Big Bang theory.

There are always weak spots in an argument, but the core argument is really solid in my view.

I like the formulation that uses the potentials under the Lorenz gauge, if I remember everything correctly. Then all components follow a wave equation, and there is one linear first-order constraint that looks close to a simple continuity equation. Now I wanted to understand what this actually meant and searched for some example that would reveal it to me. And there is one telling partial solution. You can have a solution where the constraint becomes a continuity equation. You will have a sort of "magnitude of disturbance" field in the scalar potential, and the vector potential will be a sort of momentum potential, e.g. the scalar potential times a velocity field of constant speed c. It's a really trivial and very particular solution, but you can modify it: if along a direction you have the same disturbance, then you can choose any velocity you like.
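To make the setup concrete, here is my reading of it in formulas (this is my notation and my reconstruction, not the original post's):

```latex
% In the Lorenz gauge every component of the potentials obeys a wave equation:
\Box\,\varphi = \frac{\rho}{\varepsilon_0}, \qquad
\Box\,\mathbf{A} = \mu_0\,\mathbf{J},
% tied together by the first-order constraint (the Lorenz condition):
\frac{1}{c^2}\,\frac{\partial \varphi}{\partial t} + \nabla\cdot\mathbf{A} = 0 .
% With the ansatz A = (phi / c^2) v for some velocity field v, the constraint
% becomes exactly a continuity equation for the "disturbance density" phi/c^2:
\frac{\partial}{\partial t}\!\left(\frac{\varphi}{c^2}\right)
  + \nabla\cdot\!\left(\frac{\varphi}{c^2}\,\mathbf{v}\right) = 0 .
```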

Now, in my view this captures some essential features of electromagnetism. A constant stream of light does not depend on the speed of the stream; it is the information that is constrained to the speed of light, not necessarily the actual physical disturbance transport.

Note that if we have just one stream, the transversal direction has to be transported at the speed of light, which indicates plane waves.

Even if this is only a simple particular solution, one would probably be able to deduce Maxwell's equations after closing the space using Lorentz transforms.

Ok, this is just mathematical play, but from my position of knowledge it poses very interesting questions. It's just speculation from a guy who is not an expert. But I still hope that I've teased your imagination, so please have fun, play with the mathematics, enjoy the stars and have a very happy hacking new year.

22 Nov 2009 (updated 16 Jan 2011 at 07:20 UTC) »

Exploring, Fighting and Enjoying

I'm coding on a library called BoopCore right now. The reason is that I was thinking about how to write the code for the new version of Qi, called Shen.

Oh, I did a small test with the Einstein riddle and got basically the same speed as gprolog, so the "prolog" compiling part is not too bad.

My idea, though, is that it should be used by designers of PL tools and by people with specific needs. As an example, backtracking and unification can be really customized and fast.
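To illustrate what customizable unification with backtracking can look like, here is a minimal Python sketch of my own (the names `Var`, `unify`, `undo` and the trail idea are mine for illustration; this is not BoopCore's actual API):

```python
class Var:
    """A logic variable: unbound until .val is set."""
    __slots__ = ("name", "val")
    def __init__(self, name):
        self.name, self.val = name, None

def walk(t):
    """Follow variable bindings to the current representative term."""
    while isinstance(t, Var) and t.val is not None:
        t = t.val
    return t

def unify(a, b, trail):
    """Unify two terms, recording each binding on the trail.
    On failure, partial bindings remain; the caller undoes them."""
    a, b = walk(a), walk(b)
    if a is b:
        return True
    if isinstance(a, Var):
        a.val = b
        trail.append(a)
        return True
    if isinstance(b, Var):
        return unify(b, a, trail)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        return all(unify(x, y, trail) for x, y in zip(a, b))
    return a == b

def undo(trail, mark):
    """Backtrack: unbind everything recorded since the mark."""
    while len(trail) > mark:
        trail.pop().val = None
```

The trail is the customization point: a Prolog-style engine pushes a mark before each choice and calls `undo` on failure, and hooks (occurs check, attributed variables, and so on) slot naturally into `unify`.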

The program itself is pure magic, i.e. it has some really poor design. But this is ok for a first version. That version has to have all the features, which are not decided from the beginning but grow as bugs are found and new features have to be implemented. This means I am Exploring ideas, Fighting the beast to get them implemented, but throughout the whole process enjoying it as if it were the best wine in the world.

Cheers!

Type Type and Type And this is what I get

I have been quiet for some time here; work is calling, family is calling, and a new version of Qi, ready to be explored, was released. In the meantime I have been studying the sequent calculus and the Qi source code to learn how to mod it to my taste.

So perhaps my coolest hack is a type-driven macro framework. A little code:

```
(def-tp-macro ex1
X                  : A  -> (? Usage number)

[X : num  Y : num] : A  -> (simple-num-num X Y)
where (element? simple Usage)

[X : num  Y : str] : A  -> (simple-num-str X Y)
where (element? simple Usage)

[X : num  Y : num] : A  -> (complex-num-num X Y)
where (element? complex Usage)

[X : num  Y : str] : A  -> (complex-num-str X Y)
where (element? complex Usage))
```

The first line says that for any kind of arguments and any type of evaluation context, first ask for the usage; the returned values will be stored in Usage. This sends the type system out to track usages according to some mechanism, and happens the first time. The next time, if not inhibited, (ask? Usage) will be negative and the system goes on to expand according to the function signature and the properties of the value of Usage. In (? Usage T), T is the type that is returned from the function in the non-ask context, e.g. (+ (ex1 X1 Y1) (ex1 X2 Y2)) should type-check!!

It works stupidly, by type-checking repeatedly: whenever something is asked for, a retry is done. This process can be made much more effective by remembering the type deduction and using that memory at nonvolatile sites in the deduction tree, a property that somehow has to be passed up the tree. Anyway, (ask? Usage) will be true if someone later in the deduction has inhibited it. So if an ex1 value is used in an argument of ex2, which also asks for information, then ex2 inhibits ex1 when it asks for information. (To speed up this deduction process, ex1 should be marked as nonvolatile.)
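The ask-then-retry loop can be modelled in a few lines of Python. This is a toy of my own making, not the Qi framework: `Ask`, `expand_with_retries` and the `oracle` callback are invented names, and the real system would thread this through the type checker rather than through exceptions.

```python
class Ask(Exception):
    """Raised by a macro body when it needs a usage fact it doesn't have yet."""
    def __init__(self, key):
        self.key = key

def expand_with_retries(macro, form, oracle, max_rounds=100):
    """Re-run expansion, answering asks as they appear, until it succeeds."""
    usage = {}                              # plays the role of the Usage variable
    for _ in range(max_rounds):
        try:
            return macro(form, usage)       # succeeds once all asks are answered
        except Ask as a:
            usage[a.key] = oracle(a.key)    # the "type system" answers the ask
    raise RuntimeError("expansion did not settle")

def ex1(form, usage):
    # First clause of the macro above: if usage is unknown, ask for it.
    if "usage" not in usage:
        raise Ask("usage")
    x, y = form
    return ("%s-num-num" % usage["usage"], x, y)
```

With an oracle that answers "simple", `expand_with_retries(ex1, ("X", "Y"), lambda k: "simple")` expands to the simple variant; answering "complex" picks the other clause, mirroring the `where (element? ... Usage)` guards.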

Actually macros can work on code fragments like this

```
(def-tr-macro ex3
[adv [X : number Y : str] Z : symbol] : A -> [Z X]
...)
```
This is a quite general construct, and of course the process of macro expansion, usage-information exchange and so on can be repeated recursively.

So the macro can use information about how the result of the form is used later on in the code, and under what signature the type system thinks this form will be evaluated. So there is a message system that works in both directions in the code graph (what signals do I get from the arguments, and in what context, or through which features, is what I provide used?).

There are weak points to this construct, but I think I now have one big chunk that can be put to use when I go deep into optimizations of loop macros. At least I now have a powerful tool to do optimizations by using the computer as much as possible and my coding fingers as little as possible.

1 Jan 2009 (updated 16 Jan 2011 at 07:36 UTC) »

The number of dimensions is, well, infinite

I have a standing prediction of a science breakthrough, made after having studied the standard model.

In the standard model of physics all natural forces but gravity are combined in one formulation. The basic building blocks there seem to be fields whose properties come from Yang-Mills and Pauli theory. To understand the standard model it is good to think about what Yang-Mills and Pauli try to describe. I played around with it a little and got an understanding: it feels like these fields describe disturbances in a kind of ray fluid.

Let's move some atoms

11 Dec 2008 (updated 16 Jan 2011 at 07:39 UTC) »

Let's type unit tests

Maybe this is plain stupidity, but hey, it's different from the usual path.

If we have full control of the type system, what can we do?

Well, a type is a flow of information, so basically here you have the possibility to do meta tracking.

But let's consider another idea. Assume that we make an abstract class, with no implementation.

```
(class stack A         Abstract
(clear )->NIL
(push A)->NIL
(pop   )-> A)
```

This follows, more or less, the standard specification used in Java, C++, C#, etc.

Now consider another "concrete" class

```
(class mycl  A         Impl
(clear )->NIL
(push A)->NIL
(pop   )-> A)
```

Then typically you subclass mycl from stack and off you go. The idea now is that in order to be able to subclass mycl from the abstract class, you would have to pass a unit test, e.g.

```
(class mycl) => (class stack) if (stack-unit-test mycl)
```
Customizing the type theory by adding these tests, with caching, would be .... different.
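The same idea can be faked at runtime in Python with virtual subclasses: registration as a subclass is only granted if the candidate passes the abstract class's unit test. This is my own sketch (the helper `register_if_passes` and the test function are invented names), not a real customized type theory, which would do this statically and cache the results.

```python
import abc

class Stack(abc.ABC):
    """The abstract class: specification only, no implementation."""
    @abc.abstractmethod
    def clear(self): ...
    @abc.abstractmethod
    def push(self, a): ...
    @abc.abstractmethod
    def pop(self): ...

def stack_unit_test(cls):
    """The behavioural contract: LIFO order after pushes."""
    s = cls()
    s.clear()
    s.push(1)
    s.push(2)
    return s.pop() == 2 and s.pop() == 1

def register_if_passes(abstract, cls, test):
    """Grant subclass status only if the unit test passes."""
    if not test(cls):
        raise TypeError("%s fails the %s contract" % (cls.__name__, abstract.__name__))
    abstract.register(cls)     # virtual subclassing, no inheritance needed
    return cls

class MyCl:
    """A concrete class that never mentions Stack."""
    def __init__(self):
        self.items = []
    def clear(self):
        self.items = []
    def push(self, a):
        self.items.append(a)
    def pop(self):
        return self.items.pop()

register_if_passes(Stack, MyCl, stack_unit_test)
```

After registration, `issubclass(MyCl, Stack)` is True; a class that fails the test is refused with a TypeError, which is roughly the `(class mycl) => (class stack) if (stack-unit-test mycl)` rule above.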

As a side note, this is how I view object orientation abstractly. You have a set of code snippets in the form of different function implementations; object orientation just puts names on subsets and makes arrows between the snippets. The process of making subclasses is then just rules that make arrows, plus rules for how to select which code to evaluate, or in which context to evaluate a function, when using one or several name references.

If we all looked nice and same, the ugliest person would become attractive

/Snorgersen

Words can dance

My day work is calling for serious attention and I had the usual period of being tired. This is sort of strange, but from time to time I need to sleep more. (This public effort of mine is done usually after 21:00, when the rest of the family relax before the TV or are sleeping.)

Anyway, this means that I do not pay attention to detail right now but spend my spare time gathering inputs, reading and just thinking.

I have been playing around with TeXmacs, mainly trying to see what code looks like when you allow for a little more typesetting than your favorite editor. I concluded from this effort how badly monospace is needed. The reason is that I tend to align stuff, because things are easier on the brain when you have alignment. This means that the brain learns to use the difference between two rows in its visual parsing. Letters not being aligned will just make this activity a pain (to see this, make sure you're tired; you then notice bad style). On the other hand, one feature that makes for better reading is subscript and superscript under monospace: it's ok to change the spacing in a few blocks. Personally I would also like a few extra symbols to use while programming beyond what's common. But this is old stuff; emacs has ok support for this and I've used it for some of my LISP and Qi programming. I don't think that big integral and sum operators, square-root constructs and fractions are so badly needed for programming that I would suggest implementing them in a programming editor. On the other hand, the ability to use different decorators on your symbols is attractive if they don't disturb the spacing too much.

So I concluded that decorators, sub/superscripts that disturb the spacing as little as possible, together with Unicode symbols, would be my favorite typesetting extensions to the common craft of programming.

Let's sleep on it

16 Nov 2008 (updated 16 Nov 2008 at 21:24 UTC) »

You can do everything in assembler

I first want to say that layout managers and such really are old technology, already included in all modern GUIs, but it is always good to just focus on a thing and think a little to see if things can be improved.

When drawing diagrams in PowerPoint I always get very frustrated. The reason is that I believe that something like a good implementation of the tools found in the TeX community to draw diagrams would be so much easier (for me). It would be so cool to be able to use the TeX tools to lay out stuff in diagramming and GUI construction. If you are like me, I suggest you read and play with those TeX tools like xy-pic; great fun.

Let me also point out that QP is probably not used because simpler solutions than what I describe are probably already implemented. The point of using QP is that you can specify not only that X is close to Y, but that the distance is allowed to vary if there are conflicting constraints. It is also a quite general framework if you stick to linear constructs. What I miss is a framework and tools to handle and develop layout managers. There should simply be a tool for me and others who know the math (perhaps doing it with scipy) to develop a great markup framework for laying out different things. I could imagine it being based on QP and also, of course, as a special case, linear programming. One could think of including other mathematical tools as well, like convex programming.

As an example, let's discuss implementing a tool to make sure that the layout has straight left and right margins. We stack a sequence of objects X1, X2, X3, ..., with smallest distances d1, d2, d3, ... between X1,X2; X2,X3; ..., and we punish a deviation from di by coupling constants C1, C2, .... Basically you could automatically select Ci, di according to the sizes of the objects (xy-pic in the TeX world has such ideas in its layout strategy for diagrams). Now we add a conflicting constraint: the total distance of the sequence must be constant, in order to achieve a right margin. You never get an exact right margin, but by adjusting the spaces between words or objects you may make it happen. The good thing about this is that you always get an answer, and depending on measures of how well every constraint is satisfied you can decide to move objects to the next line or include new ones. Also, you could fire up GUIs and let users who want to tweak the coupling constants do so to adjust the layout.
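For the special case described here the QP actually has a closed form, via a Lagrange multiplier: minimizing the sum of Ci(gi - di)^2 subject to the gaps gi summing to the line width gives gi = di + lambda/Ci. A small sketch of my own (the function name and interface are invented for illustration):

```python
def justified_gaps(d, c, total):
    """Gaps g_i minimizing sum_i c_i*(g_i - d_i)^2 subject to sum_i g_i == total.

    Stationarity gives g_i = d_i + lam/c_i; lam is fixed by the total-width
    constraint. Stiff gaps (large c_i) absorb less of the correction.
    """
    lam = (total - sum(d)) / sum(1.0 / ci for ci in c)
    return [di + lam / ci for di, ci in zip(d, c)]
```

For example, three natural gaps of 1 with equal stiffness justified to a width of 6 all stretch to 2, while with unequal stiffness the stiffer gap moves less; this is exactly the "adjust the spaces between words" behaviour.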

Finally, here is a trick I use to deduce the components of a quadratic criterion x'Sx + v'x + d from other representations without thinking. When I first tried to do this I went to the whiteboard and started to deduce things by hand. Well, that is not needed. Just define a function f(x) that computes the criterion, in whatever form is easiest to define. Then use your computer to evaluate f(0), f(ei), i=1,...,n, and f(ei+ej), i,j=1,...,n. Using this information you can deduce S and v. I find this a very neat tool. It is written once and will save you tons of time.
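Here is one way that probing trick can be written down, as a sketch of my own (the name `probe_quadratic` is invented; S is taken symmetric, and the i=j probes f(ei+ei)=f(2ei) separate the diagonal of S from v):

```python
def probe_quadratic(f, n):
    """Recover S (symmetric), v, d with f(x) = x'Sx + v'x + d from probes."""
    def e(i, s=1.0):
        x = [0.0] * n
        x[i] = s
        return x
    d = f([0.0] * n)                       # f(0) = d
    S = [[0.0] * n for _ in range(n)]
    v = [0.0] * n
    for i in range(n):
        fi = f(e(i))                       # f(e_i)  = S_ii + v_i + d
        f2i = f(e(i, 2.0))                 # f(2e_i) = 4 S_ii + 2 v_i + d
        S[i][i] = (f2i - 2.0 * fi + d) / 2.0
        v[i] = fi - d - S[i][i]
    for i in range(n):
        for j in range(i + 1, n):
            x = [0.0] * n
            x[i] = x[j] = 1.0
            # f(e_i+e_j) = S_ii + S_jj + 2 S_ij + v_i + v_j + d
            S[i][j] = S[j][i] = (f(x) - f(e(i)) - f(e(j)) + d) / 2.0
    return S, v, d
```

You hand it any black-box implementation of the criterion and get S, v, d back exactly, since a quadratic is fully determined by these O(n^2) evaluations.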

What I fear is that we mostly develop to make things simple for the less advanced users, because most users are on that level, which means this is ok. But being an advanced user means you don't get the benefit of most development, and that means you are not 10 times as effective but perhaps only 2 times, and you lose.

Cheers

In the beginning there was a Layout

I think that a great layout engine should be the base of a great GUI. So here are some thoughts about layout.

I don't know about all the quirks of TeX, but basically, it seems, it's the art of stacking rectangles, where the rectangles themselves can be built from stacked rectangles, and so on. This is a simplified description, but you can see one fundamental point in this argument: you can use an object-oriented description of the rectangles. Actually, you should be able to use the TeX engine to do automatic layout of a GUI. Anyone know anything about how to do this in a nice way?

Automatic layout of graphs is cool, and I once made a huge call graph of a Python program on an A2 paper with the help of the graphviz package; really nice. In graphviz you can have nodes that are rectangles. You should then be able to use the graphviz engine to find the coordinates of your GUI elements as well, to get a certain kind of layout. Is there such a link out in the wild?

Constraint programming for layout is cool and of course not a new idea. You basically give constraints for how the coordinates of each rectangle can vary in relation to coordinates in other rectangles. Something like x > y, x - y > d (x is after y and at minimum d units apart). Then you might add a penalty of magnitude, say, C((x-y) - d)^2 and include all of these in a quadratic criterion telling how much you dislike a deviation from distance d, and play with that. This is a more fuzzy and general way of implementing stacking. Say that we implement stacking according to this. We might want to align things in different ways, e.g. you want your coordinate to be close to another coordinate in some way; close can again be expressed by a quadratic criterion. You can express that a linear transformation of a set of coordinates should be close to another set of coordinates, and so on. It is a powerful concept. You will end up with quadratic programming (QP).
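To show the penalty idea concretely, here is a tiny sketch of my own (invented names; a real tool would assemble the quadratic criterion and solve it directly, but plain gradient descent is enough to see conflicting constraints being traded off):

```python
def solve_penalty_layout(n, penalties, fixed, steps=4000, lr=0.1):
    """Minimize sum of C*((x[b] - x[a]) - d)^2 by gradient descent.

    penalties: list of (a, b, d, C) terms; fixed: {index: pinned value}.
    A toy sketch, not a real QP solver.
    """
    x = [0.0] * n
    for i, v in fixed.items():
        x[i] = v
    for _ in range(steps):
        grad = [0.0] * n
        for a, b, d, c in penalties:
            g = 2.0 * c * ((x[b] - x[a]) - d)
            grad[b] += g
            grad[a] -= g
        for i in range(n):
            if i not in fixed:
                x[i] -= lr * grad[i]
    return x
```

For three stacked coordinates with x0 pinned at 0, asking for gaps of 5 and 3 but a total span of only 7 is contradictory; the solver settles on the compromise x1 = 14/3, x2 = 22/3, exactly the fuzzy behaviour wanted here.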

There are some special cases. For example, if you always stack in an exact way when you build your rectangles, you may skip the inequalities and just solve the fuzzy constraints that are left in the quadratic criterion. Then you basically need to solve linear equation systems. Can we do this fast? Well, in order to have an exact and general solution and keep it simple, you may start with linear algebra routines for full matrices. The cost of this algorithm is of order n^3 if I'm not mistaken, so a layout of 1000 coordinate values will take on the order of 10^9 floating-point operations, and the full matrix is about 8 MB of doubles. (The transfer to the computing unit may actually be much less due to sparseness of the matrix.) Now, considering that GPGPU solutions can deliver 400 GFlops (see one of the examples at the CUDA site, with a Tesla card), you may understand that in a few years, when GPU technology has been standardized and matured, you will be able to do some cool things with layouts. For cases with sparse matrices you will find other mathematical tools, and GPGPU may not be suited.

Solving quadratic programming problems with constraints is, if I remember correctly, heavily dependent on solving equation systems as well, so GPGPU technology is probably welcome for this case too.

Can we dream up more layout tricks? Well, some kind of morphing. Say that I have a set of symbols and want them to be more square-like. How can we implement this? Maybe one way is to make the best fit of a square to some set of coordinates from a symbol. Then you can calculate the closest point on the square for each point on the symbol. A new symbol with the same coordinate identities is then searched for, such that you weight the variance to the square against the variance to the old symbol, and hence morph between them. I don't know how well this works, but there are plenty of tricks like this that have the potential to implement more fuzzy notions about a layout.

And then the content began

Robot Robs Retrospection

I got some good laughs today, cool

Anyway, it got a bit too late yesterday and I ended my last post without checking it very well. Anyone wanting to implement delayed variables should try to skip the case where you cannot know for sure whether things yield or not. I just don't see any use for it, and it can quite easily create infinite loops. In the description I presented there seem to exist some good seeds of ideas, though, that can grow into a checker that warns about dangerous constructs, if you still need to implement this case as well.

How do I create those delayed variables? Well, I'm lazy. Each delayed value is composed of a value and a list of lambdas. When I know the value I fill it in and then execute the lambdas, which propagates the value through the whole delayed graph. It is expensive, of course, but doing it this way is, well, my way of having fun. Anyway, normal stateless functions can be blessed to use delayed variables. It works by checking if an argument is delayed; if it is, the correct lambda is added to the incoming delayed argument's lambda list. This lambda just checks whether all arguments have now materialized and, if so, calculates the function, puts the value in the returned delayed value, and then evaluates all the lambdas in the returned delayed value's lambda list.
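The mechanism described here can be sketched in Python (my own minimal version of the idea, with invented names `Delayed` and `bless`; the original is in Lisp, and this ignores error handling):

```python
class Delayed:
    """A delayed value: an optional value plus lambdas to run when it arrives."""
    UNSET = object()

    def __init__(self):
        self.value = Delayed.UNSET
        self.callbacks = []

    def ready(self):
        return self.value is not Delayed.UNSET

    def fill(self, v):
        """Set the value and propagate it through the delayed graph."""
        self.value = v
        for cb in self.callbacks:
            cb()
        self.callbacks = []

def bless(f):
    """Lift a pure, stateless function so it accepts Delayed arguments."""
    def wrapped(*args):
        out = Delayed()
        def try_fire():
            # Fire only once every delayed argument has materialized.
            if all(not isinstance(a, Delayed) or a.ready() for a in args):
                vals = [a.value if isinstance(a, Delayed) else a for a in args]
                out.fill(f(*vals))
        for a in args:
            if isinstance(a, Delayed) and not a.ready():
                a.callbacks.append(try_fire)
        try_fire()          # maybe everything is already known
        return out
    return wrapped
```

Blessed functions compose: the result of one blessed call can be fed into another, and filling in a leaf value ripples down the DAG exactly as described.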

Everything is not what you think it is; reality is a retinal image. Nothing... makes you think.

My God

What's so cool with the Internet and forums like Advogato is that ideas, words and units of thought flow from mind to mind, mutating, mixing, replicating, bouncing, in a very subtle way. The communication is many-to-many, and this is indeed important; we must be able to communicate in order to make sure that the world won't fuck up.

Introspection is a nice, powerful feature that seems like a hot topic these days. I try to practice it a lot, not only in my code but on myself. It is kind of cool to dig out the associations and logic behind an idea.

One idea I got was to check out Scheme. There is some nice food for thought in that community. I've been planning to read about lazy evaluation, but here is my current view on delayed evaluation.

If D1 and D2 are delayed, then basically D3 = f(D1,D2) is delayed as well if any of its arguments is delayed. Now this means that D1 -> D3 and D2 -> D3, and you see that what you have is a directed acyclic graph (DAG) that represents your overall delayed state, with a pointer to a function at each node. So it is not at all a difficult concept. But what about (if D A B), where D is delayed? Here it seems you need to store the values of both A and B in order to be able to use one of them later, and this is actually a difficulty: I will need only one of A and B, but I have to get both values in order to retrieve the correct one later. If you have recursion in A or B you may find your way into an infinite loop. So if A = (g C E) you need to store (g C E) and continue recursively. With no gotos and no internal state in your code you will then be safe. If you have state, you need to store the state related to both A and B. A key issue here is to avoid relation pollution, e.g. you do not want to store state that is not needed to retrieve the value later.
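For the pure (stateless) case there is a standard way around computing both branches: store both A and B as thunks rather than as values, and force only the chosen one once the condition materializes. A sketch of my own (the name `delayed_if` is invented; this assumes the branches are side-effect free, exactly the "no gotos and no internal state" condition above):

```python
def delayed_if(get_cond, then_thunk, else_thunk):
    """Delay an if over a not-yet-known condition.

    Both branches are stored unevaluated; when the result is forced,
    only the selected branch ever runs, which also avoids the
    infinite-loop risk of eagerly evaluating a recursive branch.
    """
    def force():
        return then_thunk() if get_cond() else else_thunk()
    return force
```

The returned thunk plays the role of the delayed node in the DAG: nothing happens until it is forced, and forcing touches exactly one branch.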

In my (switch-if P A B) construct, if P then A is incremented, else B is incremented. I try to avoid using delayed values in P, and storing the state of A and B may seem to solve the problem. But will it be safe? No, there is another problem. A and B will send signals backwards through the system about yielding values, about finishing iteration and about skipping stuff. So depending on the possible signals sent back, you will have both complicated knots to untie and more trivial ones. Some of these questions can be mitigated by simply returning values of a special type, like a skip value; a quit value can also be implemented.

Consider that we are under a delay, and now the delay is activated with a skip value. This is the interesting construct. Consider an update of internal state:

```
(update Var Vals Oldval)
```
The simplest choice of logic is that if any Val is a skip value, Var is a skip value and equal to Oldval. A function just returns a skip value if any of its arguments is a skip value. This logic is then transported through the delay graph.
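That propagation rule is easy to state in code. A minimal sketch of my own (`SKIP`, `lift_skip` and `update` are invented names for illustration):

```python
SKIP = object()   # sentinel for the skip signal

def lift_skip(f):
    """Bless a pure function: a SKIP in any argument makes the result SKIP."""
    def wrapped(*args):
        if any(a is SKIP for a in args):
            return SKIP
        return f(*args)
    return wrapped

def update(vals, oldval, combine):
    """State update rule: if any incoming Val is SKIP, keep the old value."""
    if any(v is SKIP for v in vals):
        return oldval
    return combine(vals)
```

Because every lifted function forwards SKIP, the signal flows through the whole delay graph on its own, and only the state-update sites need the special keep-old-value rule.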

The finish signal is a little more subtle, but basically each object has a finish value Vf, and if object Y at an update gets a finish value, then it should fire its finish function to update Vf, the variable representing the finishing value. Otherwise, the transportation and action of the finish type is like skip, but of higher precedence.

Now the actual iteration is something like

```
(do
(make-generator X)
(loop X))
```

```
(define loop X
for:
  (X.next)
  Ret = (relate Ret (X.val))
  (if (finish? Ret)
      (goto exit:)
      (goto for:))
exit:
  X.Vf)
```

The relate is just a relation that represents the transport of the finish signal all the way to the current Ret, if this has been delayed, and you may have to iterate a few more iterations than needed.

Trying to handle the yield signal is delayed. No, just kidding. You need to do the following:

```
(do A B C)
```
The logic is: take the next A; if A is a yield, stop! and use the value of A; the next time, start with A again. Else continue with B and so on. Now, if B is delayed and might be a yield, you need to store the state of the do, i.e. the state of A, B and C and the internal state of the do, i.e. a program pointer. Then at the next update of this object, if the delayed value has materialized, you just continue with the evaluation of the do construct and everything will be fine; else you have to chain the actual updating and return the delayed value.

Can we have infinite loops? Well, at each step you will have a tree of objects that you update in the natural order; when you update, subtrees will collapse and new subtrees will be created. A delayed object is returned at a certain object and then transported to objects below it in the order. Hence, if you keep a logic where you always fill in the missing information at the same object that issued the delay, you will be safe.

I didn't realize all this before I started writing. This is why I write. Enjoy!

87 older entries...
