# Older blog entries for chaoticset (starting at number 14)

Well, I've spent a big pile of time away from here. Unfortunately, it's mirrored in my relative lack of actual development.

I've hit a big stumbling block with the fuzzy module, so I shelved it for a little while. Good news -- the game matrix solving module I wanted to write ages ago is now within my grasp! I've finally comprehended the general method for solving games outlined in _The Compleat Strategyst_ and can attempt to code it.
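For a 2x2 game with no saddle point, the method from _The Compleat Strategyst_ boils down to the "oddments" trick: each player's mixing ratio is the absolute difference of the *other* row's (or column's) payoffs. A rough sketch of that, in Python for illustration (this is my toy code, not the planned module):

```python
def solve_2x2(m):
    """Solve a 2x2 zero-sum game with no saddle point by oddments.

    m: [[a, b], [c, d]] -- rows are player 1's strategies,
    columns are player 2's. Returns (row mix, column mix, value).
    """
    (a, b), (c, d) = m
    # Row oddments: |difference| of the entries in the OTHER row.
    r1, r2 = abs(c - d), abs(a - b)
    # Column oddments: |difference| of the entries in the OTHER column.
    c1, c2 = abs(b - d), abs(a - c)
    rt, ct = r1 + r2, c1 + c2
    p = (r1 / rt, r2 / rt)          # player 1's mixed strategy
    q = (c1 / ct, c2 / ct)          # player 2's mixed strategy
    value = (a * r1 + c * r2) / rt  # expected payoff against either column
    return p, q, value
```

For example, `solve_2x2([[2, -3], [-3, 4]])` gives each player the mix (7/12, 5/12) and a game value of -1/12. The general n-by-m method is messier, but this is the kernel of it.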

Once it's in place (inefficient though it will be for very large matrices), I can look at optimizations. Probably the major one will be removing obviously bad (dominated) strategies for each player when they're present. (The analysis for that will probably get prohibitively complex quickly.) Another could be a reduction in resolution: rounding off a lot of the numbers and then caching results would help if there are a whole lot of values to run.
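The "remove bad strategies" optimization is just iterated dominance: drop any row that's no better than another row in every column, and any column that's no better for the column player. A naive sketch (function name and approach are my own guess at it, again in Python for illustration):

```python
def reduce_matrix(m):
    """Repeatedly strip dominated rows and columns from a payoff matrix.

    Rows belong to the maximizing player, columns to the minimizer.
    Mutates and returns m. Naive O(n^3)-ish scan per pass -- the
    "prohibitively complex" part is doing this cleverly, not at all.
    """
    changed = True
    while changed:
        changed = False
        # A row is dominated if some other row is >= it everywhere.
        for i, row in enumerate(m):
            if any(j != i and all(x <= y for x, y in zip(row, other))
                   for j, other in enumerate(m)):
                del m[i]
                changed = True
                break
        # A column is dominated if some other column is <= it everywhere
        # (the column player wants payoffs small).
        cols = list(zip(*m))
        for i, col in enumerate(cols):
            if any(j != i and all(x >= y for x, y in zip(col, other))
                   for j, other in enumerate(cols)):
                m[:] = [r[:i] + r[i + 1:] for r in m]
                changed = True
                break
    return m
```

On `[[3, 1], [2, 0], [4, 2]]` this collapses all the way to `[[2]]` (a saddle point); on matching-pennies-style matrices it leaves everything alone, which is when the full oddments machinery has to kick in.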

Anyways, it's good news. On the bad side of things, I have no real chance for employment.

How often does one find oneself working on multiple projects at once, leaving some for a time (even weeks or months) before touching them again?

I'm trying to find out if I feel okay with the idea, or if it's some sort of slippery slope to slackerness.

Okay. Some time with a pad of paper and some Dew has produced the following rough outline of subroutines:

1. included -- takes the incoming value and returns firing rules and percentages of inclusion in a hash
2. fire -- takes that hash and returns the graph points for the resulting rule intersection
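For concreteness, here's one way the `included` step could look if the rules are triangular membership functions. The rule names and shapes here are invented for illustration -- a sketch of the idea, not the actual module:

```python
# Hypothetical rule set: each rule is a triangle (left, peak, right).
RULES = {
    "low":  (0.0, 0.0, 5.0),
    "mid":  (0.0, 5.0, 10.0),
    "high": (5.0, 10.0, 10.0),
}

def triangle(x, left, peak, right):
    """Degree of membership of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        # Shoulder triangles have their peak sitting on an endpoint.
        return 1.0 if x == peak else 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def included(value):
    """Return {rule name: degree of inclusion} for every firing rule."""
    degrees = {name: triangle(value, *tri) for name, tri in RULES.items()}
    return {name: d for name, d in degrees.items() if d > 0}
```

So `included(2.5)` fires "low" and "mid" at 0.5 each, and `fire` would then clip each rule's triangle at its degree and merge the clipped shapes into one set of graph points.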

I'm already working out a sub called get_midpoint that takes an incoming set of graph points and produces the X value of the vertical line that would divide the shape defined by the set of graph points into two equally large shapes.

Those three combined will take the value from a set of firing rules to a graph to a final scalar value.
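The get_midpoint step is the one with actual math in it: sum the trapezoid areas between consecutive points, then solve for the exact x inside the segment where the running area crosses half. A sketch of how that might go -- my own guess at the shape of the sub, assuming the points describe a piecewise-linear curve sorted by x:

```python
def get_midpoint(points):
    """Find the x of the vertical line bisecting the area under a curve.

    points: [(x0, y0), (x1, y1), ...] sorted by x, piecewise-linear.
    """
    # Trapezoid area between each pair of consecutive points.
    areas = [(x1 - x0) * (y0 + y1) / 2.0
             for (x0, y0), (x1, y1) in zip(points, points[1:])]
    half = sum(areas) / 2.0

    run = 0.0
    for ((x0, y0), (x1, y1)), a in zip(zip(points, points[1:]), areas):
        if run + a >= half:
            need = half - run
            dx, dy = x1 - x0, y1 - y0
            if abs(dy) < 1e-12:
                # Flat segment: area grows linearly, solve directly.
                return x0 + (need / y0 if y0 else 0.0)
            # Area from x0 to x0+t is y0*t + (dy/dx)*t^2/2 = need;
            # take the non-negative root of the quadratic.
            slope = dy / dx
            disc = y0 * y0 + 2.0 * slope * need
            return x0 + (-y0 + disc ** 0.5) / slope
        run += a
    return points[-1][0]
```

On a symmetric triangle like `[(0, 0), (1, 1), (2, 0)]` it returns the apex x of 1.0, as it should; on lopsided clipped shapes it lands wherever the area actually balances, which is the whole point of using the bisector instead of the peak.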

More math. I have a solution for the midpoint that involves a whole bunch of odd terms and whatnot, so I have to find some slightly more advanced algebra resources to determine how to resolve that to one side clean, one side icky but solvable.

Frustration about the math model is building, and I'm going to log off and spend some time with my old friend pen and paper to work the kinks out and get the proper formulas translated into Perl.

This math model may be wrong, it may be totally wrong, it may be completely counter to what Kosko meant when he wrote the damn thing, but by Chao it is going to sing and dance when I code it up.

Okay, despite every worst effort on my part, I've managed to locate the heart of this thing. It's the curve-calculator that I'm about to write, which was previously an icky thing that varied its behavior based on the number of incoming points.

Translation: I've started to work on the general method that will actually work, instead of the hacky-specific method that didn't really work at all.

I'm demoralized but I'm going to fix things later today.

The new "average" is going to be the midpoint in terms of area -- the bisecting point -- instead of what had previously appeared to be correct (the point where the curve is "average").

I wish I wasn't such a math doofus sometimes...

Okay.

Apparently I have botched the implementation of this model so badly, OR misunderstood the model so completely, that this thing is, as it stands, totally unusable.

My worst nightmare is true -- I've modified things so that right-triangle rules can be handled the same way as isosceles triangle rules, but there's one firing position that produces a curve well above the average.

This can't be right. It's got to be half the volume on each side; otherwise these results couldn't possibly exist in some cases. Having said that, I'm going to recode it that way tomorrow.

I'm working through the one-rule now, and it's not as easy as I remembered it being when I dismissed it a week ago. Dammit.

I suppose this would be a good rule of thumb, eh? "One of the first three things you don't bother doing at the beginning of the code because it's so trivial will take up the last 30% of your time."

Anyway, it's slow going but some of the handling subs I wrote in the two-rule version are helping me through the one-rule, so I guess that's a good sign.

Okay. Two-rule dealt with. Reasonable results returning.

Now, I deal with the fact that I incompletely dealt with the single-rule case. Sigh.

On the other hand, I feel reasonably close to a model of how to deal with X rules firing, which would be a nice thing to have. Very, very nice thing.

I'm thinking also that if this thing is too slow, I may be able to write a ponder sub to precalculate values and store them in an easy lookup table. While this violates the malleability of fuzzy systems, it could still incorporate learning by having cycles: an experience cycle where precomputed values are compared to optimal results (as determined by actual humans, perhaps), then a learn process where adjustments are applied to the rules through a statistical method, then another ponder cycle to precalculate for speed, then...you get the idea. Rinse, repeat.
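The ponder idea is really just memoization at reduced resolution: sweep the input range at some step size, run the full pipeline once per step, and afterward round incoming values to the nearest precomputed key. A minimal sketch, where `evaluate` stands in for the real included-fire-midpoint chain (names invented here):

```python
def ponder(evaluate, lo, hi, step):
    """Precompute evaluate(x) for x from lo to hi at the given step."""
    table = {}
    x = lo
    while x <= hi:
        key = round(x, 3)        # reduced resolution: the cache key
        table[key] = evaluate(key)
        x += step
    return table

def lookup(table, x, step):
    """Round a live input to the nearest precomputed key and look it up."""
    key = round(round(x / step) * step, 3)
    return table[key]
```

A learn cycle would then nudge the rules, and another ponder pass would rebuild the table -- paying the full cost once per cycle instead of once per query.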
