Older blog entries for chaoticset (starting at number 13)

How often do you find yourself working on multiple projects at once, leaving some for a time (even weeks or months) before touching them again?

I'm trying to find out if I feel okay with the idea, or if it's some sort of slippery slope to slackerness.

Okay. Some time with a pad of paper and some Dew has produced the following rough outline of subroutines:

  1. included -- takes the incoming value and returns a hash of the firing rules and their degrees of inclusion
  2. fire -- takes that hash and returns the graph points for the resulting rule intersection

I'm already working out a sub called get_midpoint that takes an incoming set of graph points and produces the X value of the vertical line that would divide the shape defined by the set of graph points into two equally large shapes.

Those three combined will take the value from a set of firing rules to a graph to a final scalar value.
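For what it's worth, here's a minimal sketch of what that first stage might look like. This is my reconstruction, not the diary's actual code: I'm assuming rules are triangles stored as [left, peak, right] triples and that a "degree of inclusion" is a membership value between 0 and 1.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch, not the diary's actual code. Assumption: each rule
# is a triangle stored as [left, peak, right], and inclusion is a
# membership value between 0 and 1.

# How strongly does $x belong to a triangular rule?
sub membership {
    my ($x, $left, $peak, $right) = @_;
    return 0 if $x <= $left || $x >= $right;
    return ($x - $left) / ($peak - $left)   if $x < $peak;
    return ($right - $x) / ($right - $peak) if $x > $peak;
    return 1;    # exactly at the peak
}

# included: map an incoming value to the rules it fires and their
# degrees of inclusion.
sub included {
    my ($value, %rules) = @_;
    my %fired;
    for my $name (keys %rules) {
        my $degree = membership($value, @{ $rules{$name} });
        $fired{$name} = $degree if $degree > 0;
    }
    return %fired;
}

my %rules = (
    cool => [ 0, 10, 20 ],
    warm => [ 10, 20, 30 ],
);
my %fired = included(15, %rules);
# 15 sits halfway down cool's falling edge and halfway up warm's
# rising edge, so both fire at 0.5.
```

The hash that comes back is then what a fire sub would consume to build the graph points.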

More math. I have a solution for the midpoint that involves a whole bunch of odd terms and whatnot, so I have to find some slightly more advanced algebra resources to figure out how to rearrange it so that one side is clean and the other is icky but solvable.

Frustration about the math model is building, and I'm going to log off and spend some time with my old friend pen and paper to work the kinks out and get the proper formulas translated into Perl.

This math model may be wrong, it may be totally wrong, it may be completely counter to what Kosko meant when he wrote the damn thing, but by Chao it is going to sing and dance when I code it up.

Okay, despite every worst effort on my part, I've managed to locate the heart of this thing. It's the curve-calculator that I'm about to write; previously it was an icky thing that varied its behavior based on the number of incoming points.

Translation: I've started to work on the general method that will actually work, instead of the hacky-specific method that didn't really work at all.

I'm demoralized but I'm going to fix things later today.

The new "average" is going to be the midpoint in terms of area, the bisecting point, instead of what had previously appeared to be correct (which was the point where the curve is "average").
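The area-bisecting point can be computed segment by segment when the curve is piecewise linear. This is a hedged sketch of how a get_midpoint might do it, under my assumption that the graph points are [x, y] pairs sorted by x with y >= 0; within the segment where the running total crosses half the area, the leftover area reduces to a quadratic in the distance along the segment.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch of get_midpoint, not the diary's code. Assumption:
# @_ holds [x, y] points, sorted by x, describing a piecewise-linear
# curve with y >= 0. Returns the x of the vertical line that splits
# the enclosed area in half.
sub get_midpoint {
    my @pts = @_;
    my ($total, @seg) = (0);
    for my $i (0 .. $#pts - 1) {
        my ($x0, $y0) = @{ $pts[$i] };
        my ($x1, $y1) = @{ $pts[$i + 1] };
        my $a = ($y0 + $y1) / 2 * ($x1 - $x0);    # trapezoid area
        push @seg, $a;
        $total += $a;
    }
    my $need = $total / 2;
    for my $i (0 .. $#seg) {
        if ($seg[$i] < $need) { $need -= $seg[$i]; next; }
        my ($x0, $y0) = @{ $pts[$i] };
        my ($x1, $y1) = @{ $pts[$i + 1] };
        return $x0 if $need == 0;
        my $m = ($y1 - $y0) / ($x1 - $x0);        # slope in this segment
        return $x0 + $need / $y0 if $m == 0;      # flat top: linear solve
        # Otherwise solve (m/2)*d**2 + y0*d = need for d.
        my $d = (-$y0 + sqrt($y0**2 + 2 * $m * $need)) / $m;
        return $x0 + $d;
    }
    return $pts[-1][0];
}

# A symmetric triangle from 0 to 10 should bisect at 5.
my $mid = get_midpoint([0, 0], [5, 1], [10, 0]);
```

The discriminant stays non-negative whenever the remaining area fits inside the segment, so the sqrt is safe for well-formed input.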

I wish I weren't such a math doofus sometimes...

Okay.

Apparently I have botched the implementation of this model so badly, or misunderstood the model so completely, that the thing as it stands is unusable.

My worst nightmare is true -- I've modified things so that right-triangle rules can be handled the same way as isosceles triangle rules, but there's one firing position that produces a curve well above the average.

This can't be right. It's got to be half the area on each side; otherwise results like these couldn't possibly exist in some cases. Having said that, I'm going to recode it that way tomorrow.
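One way to let right-triangle and isosceles rules share a single code path is to keep every rule as a (left, peak, right) triple and let the peak coincide with an endpoint for the right-triangle case. Here's a hedged sketch of what the clipping in a fire step might look like under that assumption; clip_triangle and the trapezoid output shape are my inventions, not the diary's code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch: clip a triangular rule (left, peak, right) at the
# firing degree, yielding trapezoid graph points. A right triangle is
# just the degenerate case where the peak sits at one endpoint, so
# both shapes go through the same code.
sub clip_triangle {
    my ($left, $peak, $right, $degree) = @_;
    my $lx = $peak == $left  ? $left  : $left  + $degree * ($peak - $left);
    my $rx = $peak == $right ? $right : $right - $degree * ($right - $peak);
    return ( [ $left, 0 ], [ $lx, $degree ], [ $rx, $degree ], [ $right, 0 ] );
}

my @trap = clip_triangle(0, 5, 10, 0.5);
# -> [0,0], [2.5,0.5], [7.5,0.5], [10,0]
```

With the peak at an endpoint, one of the top corners simply lands on that endpoint and the vertical edge falls out naturally.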

I'm working through the one-rule now, and it's not as easy as I remembered it being when I dismissed it a week ago. Dammit.

I suppose this would be a good rule of thumb, eh? "One of the first three things you don't bother doing at the beginning of the code because it's so trivial will take up the last 30% of your time."

Anyway, it's slow going but some of the handling subs I wrote in the two-rule version are helping me through the one-rule, so I guess that's a good sign.

Okay. Two-rule dealt with. Reasonable results returning.

Now, I deal with the fact that I incompletely dealt with the single-rule case. Sigh.

On the other hand, I feel reasonably close to a model of how to deal with X rules firing, which would be a nice thing to have. Very, very nice thing.

I'm thinking also that if this thing is too slow, I may be able to write a ponder sub to precalculate values and store them in an easy lookup table. While this violates the malleability of fuzzy systems, it could still incorporate learning by having cycles: an experience cycle where precomputed values are compared to optimal results (as determined by actual humans, perhaps), a learn cycle where adjustments are applied to the rules through a statistical method, then another ponder cycle to precalculate for speed, and so on. Rinse, repeat.
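The ponder idea can be sketched in a few lines: run the full pipeline over a grid of inputs once, then answer later queries from the table. Everything here is an assumption for illustration; the evaluate() callback stands in for the real included/fire/get_midpoint chain, and the step size and key rounding are mine.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch of a ponder/lookup pair. The $evaluate callback stands
# in for the real inference chain; grid step and key rounding are
# assumptions.
sub ponder {
    my ($evaluate, $min, $max, $step) = @_;
    my %table;
    for (my $x = $min; $x <= $max; $x += $step) {
        my $key = sprintf '%.3f', $x;    # stable hash key per grid point
        $table{$key} = $evaluate->($x);
    }
    return \%table;
}

sub lookup {
    my ($table, $x, $step) = @_;
    # Snap the query to the nearest precomputed grid point.
    my $key = sprintf '%.3f', $step * int($x / $step + 0.5);
    return $table->{$key};
}

# Toy pipeline: square the input. 3.2 snaps to the 3.0 grid point.
my $table = ponder(sub { $_[0] ** 2 }, 0, 10, 0.5);
my $y = lookup($table, 3.2, 0.5);
```

A learn cycle would then adjust the rules and call ponder again to refresh the table, matching the rinse-repeat loop described above.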

Well, there's good news and bad news.

Good news: Last night I figured out how to solve my problem by scribbling all over a piece of paper for several hours. (Thanks again for the help, but my knowledge of calculus is minuscule, so I'd end up researching a lot more of those terms than with this homebrew, which is probably the same damn formula. I don't eschew knowledge; I just want to finish this thing ASAP.) I got to bed around 5.

Other good news: Flare, for one, because I honestly think it could have some interesting repercussions. Lots of neat stuff could be done with Flare that would find a natural storage solution in XML, and lots of neat stuff could be done with existing XML data as Flare data structures, IMHO.

I'm not smart enough to do it -- yet -- but I will be, dammit. Flare. I can't say it enough.

So, my next step here is to code the rest of the solution I whanged out last night and then start refactoring, assuming I can make the solution work immediately. My main problem is one I believe was discussed in the assistance one diary entry provided: curves with exactly two intersections with the average point and no more. (The two "end" intersections don't count for my purposes, so I need to figure out how to make sure there's a middle one, and my model doesn't cover what to do if there isn't.) I'm considering a few attempts at proving it's possible, just to make sure this is something I should be worrying about. (With my incoming data, it probably isn't. The truly worrisome case I could see is where there are MULTIPLE points past two, seven or eight averages in the middle that I need to worry about. Sigh.)
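Whether the worrisome case can occur is easy to check numerically: count how many times the piecewise-linear curve strictly crosses the horizontal line at the average level. This is a hedged sketch under my assumed representation ([x, y] pairs sorted by x); only strict sign changes count, so points that merely touch the line are ignored.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch: count strict crossings of a piecewise-linear curve
# with the horizontal line y = $level. Representation ([x, y] pairs
# sorted by x) is an assumption.
sub interior_crossings {
    my ($level, @pts) = @_;
    my $count = 0;
    for my $i (0 .. $#pts - 1) {
        my $a = $pts[$i][1]     - $level;
        my $b = $pts[$i + 1][1] - $level;
        $count++ if $a * $b < 0;    # strict sign change on this segment
    }
    return $count;
}

# A trapezoid crosses the 0.5 level twice, once on each slanted edge.
my $n = interior_crossings(0.5, [0, 0], [2, 1], [4, 1], [6, 0]);
```

Running this over a batch of real inputs would show quickly whether more than two crossings ever turn up in practice.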

So if this doesn't pan out then I'm going to take the tougher plan and figure out a way to bisect it, which should provide the results I need no matter how many "average" points there are. Goddamn modes.
