# Older blog entries for tampe (starting at number 86)

My best practice

Many times you hear that it is good to read to your children, and it is.

Many times you hear that it is good for your children if you are home with them when they are small, and it is.

But I'm so tired of the focus on the children. The thing is, it is good for _you_ to take care of your children. At least, that is my experience.

I have found out that caring for my child has increased my understanding of humans. I have found out that reading and telling stories has improved my verbal abilities a lot, my memory is better, and my creativity is well ahead of the times before her birth. I found out that my wife can bring in a serious amount of money to the family and, above all, that she is not hitting the wall like many other mothers in my country. I found out that I do more in less time. I found out that I can work all day and all night, enjoying it, not feeling stressed a bit. I guess I just found out about life.

The trick to achieve all this is UBNA or ...

This is a constant fight. I realized the key to success was to try to use my brain when doing stuff and not switch on my autopilot, or what I call my arse. Things might be boring, but if you use your brain _everything_ can become interesting. Caring can be boring; caring can be very interesting. Reading can be boring, but if you put your heart into it, you will find that your mind and voice playing together are like any cool rock band you like, and after a while you will dig it.

Yielding stuff has its complications. Consider this generator:

```
(+ (do (yield 1) (yield 2))
   (do (yield 3) (yield 4)))
```

(do (yield 1) (yield 2)) first yields 1 and then yields 2. If there were no yield in the other argument, that argument would simply not be updated, but here we have another yield statement. How would you solve that?

I just consider the evaluation of the yields as parallel statements, meaning that the above construct will give

```
(+ 1 3)
(+ 2 4)
```
The whole construct is a little esoteric, but you may want the possibility to first yield all the elements of the first argument and then all the elements of the second argument, or any combination you can dream up. This is a sequencing pattern that I have not considered, but it's interesting to make a note of it in the head. Implementing a tool to allow all possibilities is not that hard, but whether it is worth it, I don't know.
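A minimal sketch of this parallel reading in Python, assuming we model each argument as a generator and combine corresponding yields pairwise (the names here are my own, not part of any framework):

```python
def parallel_apply(op, *gens):
    # advance all argument generators in lockstep and combine
    # the corresponding yields: the "parallel" reading above
    for values in zip(*gens):
        yield op(*values)

def left():
    yield 1
    yield 2

def right():
    yield 3
    yield 4

print(list(parallel_apply(lambda a, b: a + b, left(), right())))  # [4, 6]
```

The other sequencing (first exhaust the left argument, then the right) would correspond to chaining the generators instead of zipping them.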

A strange day, is also a day, Cheers

TeX

Look at it, it's lovely, it's a mathematical formula

I check out the texmacs project regularly. I think that it has such potential, and there are some very interesting ideas floating around in that project.

What are the optimizations I've been thinking about? Well, my method, although slow, should give a taste of how it works, whether it works, and shed light on interesting patterns (not new, but interesting). So I'm just looking to find the elements and how to combine them. This means that I do a tremendous amount of function referencing and consing! Maybe just inlining all the small functions, plus some tools around that, will be enough (the consing is due to me not using multiple-value-bind); maybe not.

Other loop macro frameworks that are interesting and fun to check out usually seem to flatten the stuff, and this could be a good pattern to mimic for success. I have tools that manage the closured internal variable state, so flattening the variable space can maybe succeed. Now, flattening everything and using gotos might be too clumsy, and much of that can perhaps be done by the Lisp compiler anyway; it might be wiser to concentrate on some kind of loop-unrolling helper framework, to find fast paths and cut out unlikely paths to functions. I don't know. All I know is that while I'm coding with functions and closures, I keep, in a parallel universe in my head, a version of the code that is flattened.

One note: some patterns can function on the stack, but not all can. If stack variables are faster, you may want to think about how to make as much use of the stack as possible and minimize the need to use the heap.
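As a tiny illustration of what I mean by flattening (a sketch of the idea only, not the actual machinery): an accumulator held in a closure and updated through a call, versus the same loop with the call inlined and the state hoisted into plain locals:

```python
# accumulator kept in a closure, updated through a function call
def sum_closured(xs):
    total = 0
    def step(x):
        nonlocal total
        total += x
    for x in xs:
        step(x)
    return total

# the "flattened" version: the call is inlined and the state
# lives directly in the loop's local variables
def sum_flat(xs):
    total = 0
    for x in xs:
        total += x
    return total
```

Both compute the same thing; the flat version is what a compiler (or a flattening loop macro) would ideally reduce the first one to.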

This might end up as a quantum dot of importance, but I don't care, at worst it will result in a document with cool patterns of computing, that's good enough for me

Well, now I need to finish the slow slurry singularities of black magic wormholes between I and I + 1.

Cheers

1 Nov 2008 (updated 1 Nov 2008 at 13:44 UTC)

Oh my, it's high up here!

I should remind myself to quote Newton. But I don't. Why? Well, that's implicit. The thing is, every good thought I have can probably be found in a scientific archive somewhere, or in some project X out there, and my whole experience is based on the efforts of others. That's how an open system works, and that's probably a key fact explaining the technological success we have had over the last centuries. So I'm blowing in the wind, and we are all blowing the wind.

What I'm heading at is my clambda stuff. This will basically be C morphed into a more lispy syntax. That is, there should be a mapping between C syntax and clambda, and once you manage to grasp that mapping, the translation is simple, meaning that you can actually use your C experience. Now why this exercise? The main reason is that I want to build my own object system!! ... Just kidding. When I coded in C, I recognized that stuff I wanted that is not that hard to do in Lisp is difficult in C due to the inferior preprocessor. That's it: I want to add a lispy preprocessor to C, C++, Java, C#, ...

I chose the Qi flavor of Lisp because, well, I think it is the best base to start from, and I wanted to learn it.

Now, to mentally sell this stuff and have fun, I would need to implement extensions to C that make a difference, e.g. with some batteries included. Ok, here are some ideas:

1. Meta object system. He he, I'm not kidding here. Look at Lisp: they have actually abstracted out the action of dealing with objects into a tool that constructs object systems. This is a neat idea, and why not bring it over to the C family so that everybody can easily make their own home-brew object system? (Python seems to have a similar system as well, and you may find more examples.)

2. Loop macro facilities. This is what I have been discussing and working on for some time now.

3. Static type checking. Tools exist today, but I would like better meta facilities to explore new inroads and versions of it, and better integration.

4. Custom types, deduction of types. Qi has an advanced type system; it would be cool if we could use some of that.

5. Structs vs file systems. Structs in C are much like a file system without the tools of the file system. Well, using C++ we do have the potential of a file system, but it would just be a cool idea to tighten these two metaphors together to see what it brings, still keeping it lightweight. The idea is that we work with the struct incrementally, like in a shell: we consider files, we consider directories, we don't consider content. We just manage the content structures and the metadata of the structures, and at some point the struct is set in stone. E.g., this is meta-stuff to define data structures, not to manage the content.

6. Different views of code. If you want to study very difficult code, because the problem is difficult, the notation system is very important for your success. Long variable names and whitespace are not well suited for this. Hard stuff just demands that the patterns inside it can be recognized and matched by your experience. For these use cases I need some tools to design notation systems. My idea here is to use a simple notational macro facility and a tool to lay out cheat-sheet information, well aligned, in a column to the right of the code. E.g., I do not want to use tooltip technology; I want the translation information laid out, well visible, in a column to the right. Basically a macro is associated with its usage: its usage for the first time, its usage after not being used for x lines of code, and so on. The idea is that the definitions of these macros should always be visible according to some logic.

7. Transfer of meta information. True openness will allow users to tag their information and let these tags transfer through the system, to basically do more with your data. Think of associating each byte with meta information, and having tools to define how all basic operations handle the flow of that information. This framework can be used if you want to track your content, for security, and so on. It is worthless for doing DRM because the user is in control of it, and this is why it is interesting.

He he, in 10 years I will be finished with this and it will rule the world.... No, of course that will not happen, but it is so fun blowing in the wind, and what is important is progress, and that, my friend, is all of us blowing the wind.

I'm falling, Touching ground, Over and Out

CTRL-w

I'm using emacs, but it has its drawbacks. When editing this blog I tend to accidentally hit ctrl-w. If there is only one open tab in Firefox, bam! Firefox quits and I cannot find the text in the history. I looked around for a fix, looked at firemacs, but decided to wait for an upcoming plugin that will let you use emacs directly instead, e.g. EmbeddedEditor. Meanwhile I will make sure to have one extra tab open, or just paste in the text.

Oh well, here is some loopy, cranky stuff. Consider:

```
(for X in (li L)
  (for Y in .X
    (sum Y)))
```
Can you see two possible interpretations of sum?

Here is one:

```
(for X in (li L)    (A)
  (for Y in .X      (B)
    (A sum Y)))
```
E.g., yield the sum of all Y in the list of lists. And of course, the following will instead yield the sum of the elements of the last list in the list of lists L:
```
(for X in (li L)    (A)
  (for Y in .X      (B)
    (B sum Y)))
```
(We could have skipped B here.) Permutations could be expressed as
```
(for X in (li L)    (A)
  (for Y in .X      (B)
    (coll ((A X) B coll Y))))
```
This is a complex construct, but to decipher it we could use the notation
```
(X B coll Y)  ==  (update in B (coll.X Y))
```
E.g., coll.X is a family of collectors parametrized by X and defined in X's defining scope, A. Each coll.X is updated in scope B by the value Y.
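The two readings of sum above can be sketched in plain Python (my own hypothetical names; lol stands for the list of lists L):

```python
def sum_in_a(lol):
    # (A sum Y): the accumulator lives in the outer scope A,
    # so the loop yields the grand total over all inner lists
    total = 0
    for xs in lol:
        for y in xs:
            total += y
    return total

def sum_in_b(lol):
    # (B sum Y): the accumulator is re-created in the inner scope B,
    # so what survives at the end is the sum of the last inner list
    total = 0
    for xs in lol:
        total = 0
        for y in xs:
            total += y
    return total

print(sum_in_a([[1, 2], [3, 4]]))  # 10
print(sum_in_b([[1, 2], [3, 4]]))  # 7
```

The whole point of the scope labels A and B is to make this choice explicit instead of accidental.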

Let's put in a mental overdrive, let's talk like a spook, and walk the talk.

Consider walking a tree. Personally I'm very fond of doing that by recursion; these two models fit so nicely together. When walking, and you are at function G, the stack trace could be represented by, for example, "FHFFHHGGGHG". Do you see where I'm heading with this observation?

```
(define atom-12
  X  ->  [atom-12
          (FromStart          Here             sum 1)
          (FromLatestFunction Here :default -1 sum 1)])

FromStart            = ^*
FromLatestFunction   = F[^F]*
```
Permutations can also be had by using (Matcher Var), e.g. parametrizing by Var defined in the scope defined by the Matcher. See, using abstractions makes us trekkers in a larger universe than the one we set out to explore in the beginning. You can combine any of the current knowledge about how to do matching on sequences and use that as a Matcher; you can use Perl regexps or whatever. The simple, unoptimized implementation is to just use the scopes on the stack as objects that you can duck-type in these constructs, define a mechanism to label scopes, and then just use pattern matchers and some simplifying notation for the most obvious and common cases.

A matched object has a start, and that start is what you would like to use to define the needed scope and associated variables. Another possibility for matcher definitions is to start at the current scope and move backwards. One should also consider cases with multiple matches of the same pattern and some way to handle this, like doing stuff in multiple channels, meaning that secondary collectors and accumulators get distributed just like in matrix algebra:

```
A*(a + b + c)  -> A*a + A*b + A*c
```

Calling Earth / Good Bye / Over and Out

Cool Cranks

I'm maybe too kind to people; cranky people tend to want my company (temporarily), and you know what, that's cool.

The thing is, I listen to the people I meet, and I always try to deliver food for thought and play along in dialogs with my character of the moment, simply because it makes a difference.

You see, I've got this idea that if I (at a reasonable rate) answer someone a little cranky, who usually is quite intelligent but by some kind of accident or sickness went out on a limb, they will become a little less cranky, a little less sad, a little less dangerous.

And you know what, some of these dialogs become so cool that I keep them as funny/dear/cool memories.

This happens maybe once a year to me, so it is not a big effort.

Take Care

Enjoy being wrong and you will do right

I have piled up quite a lot of the loop stuff; time to actually make it work. The stuff has grown, and I need to start considering what the code should look like when using the machinery. I want to remove the dot in .+, .* and so on in the default mental model, and instead use some kind of mark if you want to use the original function.

Actually, variables V that contain generators come in three mental models. (At least that's my conclusion.)

1. V is updated at the locations where it is defined in the loop, but values are collected wherever they are present, if in a "return value position". This should be the default in order to get loops of loops correct.

2. V is updated whenever it is present in the evaluation paths. If you use these variables, be sure to mark them with something like ?V?. (The question is whether they are useful, but it is a mental model that is consistent.)

3. V has to be explicitly updated (so we should write .V or something to mark them as special). These variables have their uses, but they are not that common.

It is interesting to note that function invocations have to be manipulated for cases like

```
(f (sum V)  H K)
```
If the function has a switch like an if or case statement (switching whether the next-value evaluation happens), you will get the wrong behavior, so again you would like to translate this to
```
(.let ((A (sum V))) (f A H K))
```
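A Python sketch of why the translation matters (f and the counter are my own hypothetical stand-ins for the function and for V): if the next-value expression ends up under a branch inside the callee, the generator is advanced a data-dependent number of times, while binding the value first forces exactly one step per call.

```python
import itertools

def f(get, h, k):
    # the "switch": with a lazy argument, get() only fires on one path
    if h > k:
        return get() + h
    return k  # on this path the generator is never advanced

v = itertools.count(1)
f(lambda: next(v), 0, 5)   # branch not taken: v not advanced
f(lambda: next(v), 5, 0)   # branch taken: v advanced once
# after two calls v has stepped only once -- the wrong behavior

w = itertools.count(1)
def call_let(h, k):
    a = next(w)            # like (.let ((A (sum V))) (f A H K))
    return f(lambda: a, h, k)

call_let(0, 5)
call_let(5, 0)
# w has stepped exactly twice, once per call
```

This is exactly what the .let translation above buys: the update happens once per iteration, regardless of which branch the callee takes.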

As you all have noticed, the code I present here is not 100% working; actually it is more or less dysfunctional, but that's ok. I usually spend the night correcting the mistakes. I consider it more important for my development to enter stuff into this diary than to keep it absolutely 100% correct. The thing is that it is a nice exercise to try to formulate the abstractions floating in my head with words, and it's a good creative stimulus.

I'm not happy with the parser examples, but I decided to postpone perfecting that development until I have a working copy of all the current ideas, have cleaned it up, and released it under some project name. The code will be working but not especially useful, because its main purpose is to be a functional specification from which you can deduce unit tests. Yes! The specification documentation is code!

Cheers

Am I unique

Today I consider the idea of uniqueness of identities and the mixing of identities. Basically, the problem is that for a set of objects I want to assign an identity, or a set of identities, so that constructing identities for combinations of the objects is fast enough and also gives enough randomness to be used as keys. It is a good exercise to test out the generator framework, and as always, working gives inspiration.

So the first question you should ask is: do I need to guarantee uniqueness? Can I assume that collisions will be negligible? Sometimes clashes are rare, and a cheap extra check at the end of the analysis can verify its correctness; in the minuscule number of clash cases, a mitigating strategy can be used.

Hmm, assume that we have objects X1, X2, ... and identities I1, I2, ... What I'm wondering is whether one can use random double precision numbers in [0.1, 0.9] as identities. Then the identity of the combination [X1, X2] could be built by simply using (R * I1 + (1-R) * I2), with 0.1 < R < 0.9 a random double. This is rather fast to compute, should mix well, and a key should be possible to extract from this operation. Anyone know about the properties of such an approach?
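A sketch of that idea in Python (my own toy code, not a vetted scheme; whether the floats collide too often is exactly the open question above):

```python
import random

rng = random.Random(0)

def fresh_id():
    # one random double in [0.1, 0.9] per object
    return rng.uniform(0.1, 0.9)

def combine(i1, i2):
    # identity of the pair [X1, X2]: R*I1 + (1-R)*I2, 0.1 < R < 0.9
    r = rng.uniform(0.1, 0.9)
    return r * i1 + (1.0 - r) * i2

a, b = fresh_id(), fresh_id()
key = combine(a, b)  # a convex combination, so it stays inside [0.1, 0.9]
```

Note that as written each call to combine draws a fresh R, so the same pair gets a different key each time; fixing R per combination position would make the keys reproducible.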

Here is another approach, in which clashes do not happen for correct choices of p and q, and which also has the potential to mix well:

```
q**(k1*k2*k3..km) mod p,
```
If you are going to mix X:s and Y:s (X x Y), you could generate identities q**i mod p for X and q**(k1*i) mod p for Y, and just multiply the identities modulo p to get new identities. This could mix well; let's happily hack some generators for this...

```
(define id-gen
  K -> (mlet ((N   (for X in (li K) (mul X)))
              (P   (get-prime N))
              (Q   (MOD (get-prime (* 100 N)) P))
              (R   (for X in (li K)
                     (mk-ar (.powmod Q ** (mul X 1..)
                                     mod P)))))

         [P (slet ((N (for X in (ar R) (mk-ar X))))
              (mk-g [(/.  IS (for I in (li IS) (coll (.NTH N I))))
                     (sig IS (for I in (li IS)
                               (.SETF (.ELT R I)
                                      (.MOD
                                       (.* (.ELT N I)
                                           (.ELT R I))
                                       P))))
                     (/. IS (for I in (li IS) (coll (.NTH N I))))
                     (/. _  (setok))
                     |(pop-like)]))])
```
The input K is a list that describes the number of objects in each channel (the number of X:s, the number of Y:s, and so on). N is simply the total number of combinations. P will be a prime larger than N, found by some method (like taking the first one). Q is just another prime number (we assume that P >> 100 here). R will be q**1, q**k1, q**(k1*k2), ..., not including N. The 1.. in mul tells the transformer to start from 1, e.g. 1, k1, k1*k2, etc. powmod is just a function to quickly evaluate q**k (using q**k = q**(k/2) * q**(k/2)).
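For reference, here is a self-contained powmod in Python using exactly that halving idea (square-and-multiply); Python's built-in pow(q, k, p) already does this:

```python
def powmod(q, k, p):
    # compute q**k mod p by repeated squaring:
    # q**k = (q**(k//2))**2 * (q if k is odd else 1)  (mod p)
    result = 1
    base = q % p
    while k > 0:
        if k & 1:
            result = (result * base) % p
        base = (base * base) % p
        k >>= 1
    return result

print(powmod(5, 117, 10007) == pow(5, 117, 10007))  # True
```

This needs O(log k) multiplications instead of k, which is what makes the q**(i*k) identities cheap to set up.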

The slet is a let construct that allows the value of N to be popped and pushed, e.g. internal state. N just initializes to a copy of R. mk-g will generate the generator (this is a poor man's object system, but it's prototyping, so I don't care very much, and the functionality is similar). Now, the list should be a list of lambdas according to: 1. get the value, e.g. the identity; we have a selector here that picks up the correct identity channel (identity for X or for Y). 2. find the next value; this just updates q**(i*k) -> q**((i+1)*k) mod p. sig is a macro that mainly introduces code so that we only take the next value once in each iteration. The third one is the final value, and the fourth is the construct that makes sure the updating is correct; the rest is basically memory management. Not a perfect interface, but you never get it right if you do not try.
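Stripped of the generator machinery, the numeric scheme itself is tiny. A toy Python version (the parameters are made up for illustration; uniqueness requires the multiplicative order of q mod p to exceed the total number of combinations, which these toy numbers are not chosen to guarantee):

```python
p = 10007   # a prime
q = 5       # the base

def channel_ids(count, stride):
    # identities q**(stride*i) mod p for i = 1..count
    return [pow(q, stride * i, p) for i in range(1, count + 1)]

k1 = 10
xs = channel_ids(k1, 1)     # q**i mod p for the X channel
ys = channel_ids(1000, k1)  # q**(k1*i) mod p for the Y channel

def mix(ix, iy):
    # identity of a pair (X, Y): multiply modulo p
    return (ix * iy) % p
```

Since exponents add under multiplication, mix(xs[0], ys[0]) is just q**(1 + k1) mod p, so pair identities stay inside the same scheme.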

What I learned from my work with this was that it's nice to have lambdas that generate generators e.g.

```
(mlet (([P G] (id-gen [10 1000 1000]))
       (Fxy   (./. (_) (next [1 2] G)))
       (Ff    (./. (_) (next [0]   G))))
  (gen Key in (li H)
    (coll (switch-if (.= Key f)
                     (Ff)
                     (Fxy)))))
```
P is the prime number used in the modulo code. Now, /. is the lambda construct in Qi, and ./. is a similar lambda construct for generators. So G will be a generator, and Fxy will be a generator-generating lambda that captures G in its closure. I coded ./. using something like,

```
(defmacro ./.
[X Y] -> (mlet ((F   (gensym "f"))
(F.  (gensym "f."))
(F.. (gensym "f..")))
(do (eval (splice [[*X* X]]
[define F *X* -> Y]))
(.mk F F. F.. X id)
F.)))
```
This is ugly, but it should work. (I know pure Lisp can express this logic better.) .mk basically constructs a point-wise generator-generating function F. and a lazy, delayable version of it in F..

Enjoy!

Ahh modern scripting power, feel the fresh electrons streaming

Two of my heroes today are: 1. simple serialization of objects to files and back, and 2. maps.

The project was the following. I'm doing CFD and optimization at work and have a pretty big directory structure where all the work is done and organized. All work is organized from unix shells, and I tend not to use many graphical tools (although when hacking Java code I've used popular IDEs to shell out the code, mainly because Java seems to be a pain without IDEs, and because I'm not especially good at Java). My observation was that I tend not to write aliases and shell constructs to ease navigation in the structure, and I wanted to make a change. So today's project was to make it dead simple to capture navigation habits.

To solve this, I noted that what I want is to capture the 10 most popular navigations from any popular directory x, e.g. have a mapping from a directory x to a list of popular target directories, and make it possible to list them, to choose from the list, and to add new navigation patterns. I also stored this information only for the most popular directories x.

Popularity was captured by move-to-front lists. Listing was done in sorted order of the most popular navigations. The sorting was peculiar: basically lexical sorting, with the lexicals being the directories in the path in reversed order (starting from the bottom directory and going upwards). The result is presented in a nice textual n x m table with a number tag in front of each directory, and the directories are cropped, keeping the last 30 characters of the path.

I work from about 3-4 different machines using the same home directory, so simply serializing and deserializing the data structures to a shared file on each invocation solved any persistence problems, although it is expensive (I don't notice it, though).

I implemented this in Python; it worked like a charm, and I'm probably hooked on this concept for now. (Some key-assigned icons to speed up the cognitive selection of navigations would have been a boon, but you cannot get everything. They are actually not hard to implement thanks to open picture libraries, but icons in the shell are not well supported.)

The interface is this:

```
d mydir
```
This is the same as cd mydir, but it stores the navigational pattern via the move-to-front mechanism.

```
dd
```
This lists popular navigations from the current directory.

```
d 3
```
This walks navigation 3 as listed by dd.

All this can be generalized, but the approach sketched above should be good enough for my needs.
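The core bookkeeping of such a tool is tiny. A hedged sketch of the move-to-front map, the serialization, and the reversed-path sorting (function names, the state-file path, and the cap of 10 are my own guesses, not the actual tool):

```python
import os
import pickle

STATE_FILE = os.path.expanduser("~/.dnav.pickle")  # hypothetical location
MAX_TARGETS = 10

def load_nav():
    # shared, serialized state: a dict from directory to target list
    try:
        with open(STATE_FILE, "rb") as f:
            return pickle.load(f)
    except (OSError, pickle.PickleError):
        return {}

def save_nav(nav):
    with open(STATE_FILE, "wb") as f:
        pickle.dump(nav, f)

def record(nav, src, dst):
    # move-to-front: the latest target goes first, list capped at 10
    targets = nav.setdefault(src, [])
    if dst in targets:
        targets.remove(dst)
    targets.insert(0, dst)
    del targets[MAX_TARGETS:]

def listing(nav, src):
    # lexical sort on the path components in reversed order
    # (bottom directory first), as described above
    return sorted(nav.get(src, []), key=lambda d: d.split(os.sep)[::-1])
```

A shell alias for d/dd would then call record and listing around a plain cd, loading and saving the pickled state on each invocation.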

By the way, what's up with popular languages like Java? Code generation from GUIs, that's totally crazy! Read my lips ... use macros, and if you want to modify, use __macroexpand__, and if Java doesn't support it, invent it! It's not that hard; the boilerplate is there.

Cheers

Gardening

The background: we have a sequence X1, X2, ... and for some strange reason want to sort it incrementally and use functionals of the generated sorted sequences as output. So what tricks exist?

Try exploring linearity, monotonicity, associativity and commutativity, locality, and permutational invariance.

One of the simplest functionals is a basic sum of the sequence, and of course for this you just don't care to sort it in the first place.

S is sorted incrementally; now consider a linear filtering of that sequence, where we are just interested in the end result. Most certainly you can use locality to improve performance here, but you can also use the following. When sorting, you will define a subtree at each junction point. There, define the output value (the state of the filter) for the part of the sequence related to this subtree as a linear function of the input state of the filter. Doing this means that each new value demands about log(n) operations to update at an insertion in the tree. So this is actually n log(n), the same complexity as sorting, but with a bigger constant in front and larger memory requirements. Still, I found this little trick kind of neat. Noting that this approach can be generalized to other kinds of operations, like some class of logical ones, abstracting the idea gave some coolness, and it is a good little pattern to remember.

So there is a family of tricks you can do. The criteria for these tricks are quite restrictive, and it is easy to get out of the n log n domain and into the n**2 domain. The interesting thing, though, is that this class of functionals is composable but does not mix well with point-wise operations: once you have issued one of these filters, you cannot in general do point-wise operations on the result and then issue another such filter; for that to work, it needs to be some kind of linear point-wise operation.

So the conclusion is that there are some nice tricks, but most probably you will be bound to n**2 complexity.

I'm off to make some weird abstractions, or in other words: Good Night!

77 older entries...