Older blog entries for vicious (starting at number 334)

Math is a series of trivial observations

Mathematical proof is essentially a series of completely trivial observations wrapped in complicated-sounding notation (hopefully not complicated on purpose). The trick is not to understand the proof once it is written, but to notice those trivial observations so you can write the proof in the first place. I think this is what sometimes discourages people from research mathematics. You work for two weeks on something that feels like a very hard problem, and then the solution seems trivial once found. In my case there are two operations and a limit involved, and the things you are trying to bound are not continuous with respect to that limit, so you flail around trying all sorts of complicated schemes. Then last night I think: hey, why not do these two operations in reverse order. I get rid of the limit and the problem becomes almost trivial after a bit of linear algebra. It feels good. But on the other hand it feels like: why didn’t I think of this two weeks ago?


Syndicated 2013-03-29 16:38:56 from The Spectre of Math

Moving to Oklahoma

So, I am moving to OSU in Stillwater in the fall. This entails, for the first time in my life, quitting a job before it sort of naturally ends. The consulting gigs don’t really count, I don’t think, and Eazel laid me off in the process of going under, so that counts as a natural end of a job. At Red Hat it was just a summer gig, so there was no expectation of staying longer. Then at SDSU and UCSD I was a grad student, so any employment ended when I graduated. In Urbana they did make me send a “resignation letter”, but my appointment ended anyway, as I stayed the whole 3 years I was hired for. At UCSD after that I was hired for 1 year and stayed for one year. So now … I was hired for 3 and I am leaving after 2. Six years of postdoc in 3 different places are enough, I think. One can always keep changing jobs every few years, but it’s nice to know I don’t have to now; I just can if I want to.


Syndicated 2013-03-11 20:06:18 from The Spectre of Math

The computation

So the computation has finished (actually a few days ago) for degree 19. I only yesterday got around to finishing a short paper (an addendum) and posting it to arXiv, see arXiv:1302.1441. The really funky thing is that there are so many sharp polynomials in degree 19. Up to symmetry there are 16 in all the odd degrees up to degree 17 combined, yet there are 13 in degree 19 alone. And two of the new ones are symmetric, which is actually surprising; that seems like it should be hard to achieve if you think about how they are constructed. There’s probably a bunch of interesting number theory that appears here. It should be fun to figure out what’s going on there.

This was the first time a paper of mine got reclassified to a different archive on arXiv. I put it into algebraic geometry because, well, the motivation comes from geometry, but it got stuck into commutative algebra. Which actually makes a lot more sense, especially since none of the motivation from geometry appears in this writeup.

Degree 21 has been running for about a week. It will probably be running for the next year or so, at which point I really expect it to spit out only one polynomial, the group-invariant one we already know about. That would also be kind of funky, since then there would be two degrees with as few polynomials as possible, and in between them a degree with the most polynomials we have found so far in any degree.


Syndicated 2013-02-07 16:47:48 from The Spectre of Math

The correct finite field does wonders

So the computation I had been running since Thanksgiving was getting nowhere (finding the degree 19 polynomials that are equal to 1 on x+y=1, have the fewest possible terms (11), and have only positive coefficients; we know that there are finitely many, the trick is to find them). My code didn’t really have good status updates. Well, it does tell you exactly where it is, but it takes a bit of computation to really see how it’s progressing, so I didn’t do that until a few days ago, when I started suspecting it was going way too slow.

And lo and behold, it was going way too slow. I computed that it should take another 3-7 years or so (different parts of the search space take different times, so it’s a bit hard to estimate). That was a bummer, about 10 times longer than I thought it should be. At first I was a bit resigned to this reality, but the next day I started to look into it, and one thing I figured out after running a bunch of tests was that one shortcut I was using was never triggered. The idea is that we need to find when a certain matrix with integer coefficients has a 1-dimensional nullspace. The integer row reduction is done with gmp, and it is reasonably fast. But since we do this many times over, and most of these matrices are simply full rank (no nullspace), we don’t really need to do the whole computation. So what we can do (and this is a useful trick to know) is first do the computation in some small finite field, that is, do everything mod p for some small prime p. If a matrix is full rank mod p, it is full rank over the integers. The computation can be done rather quickly this way, and you don’t even have to involve modular arithmetic during the elimination, since you can precompute all the possible operations first and build up a big table; so instead of two multiplications, an addition, and finding the remainder, you just do a table lookup. Anyway, that gets us quite a bit of a speedup.
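To illustrate the trick, here is a minimal sketch in Python (the actual code is C using gmp; it also uses a precomputed lookup table for the mod-p arithmetic rather than taking remainders at every step, which this sketch does not bother with):

    def full_rank_mod_p(rows, p):
        # Gaussian elimination mod p.  If the matrix is full rank mod p, it is
        # full rank over the integers (trivial nullspace) and the expensive
        # exact computation can be skipped.  If it is singular mod p, we learn
        # nothing and must fall back to the integer row reduction.
        m = [[a % p for a in row] for row in rows]
        nrows, ncols = len(m), len(m[0])
        rank = 0
        for col in range(ncols):
            pivot = next((r for r in range(rank, nrows) if m[r][col]), None)
            if pivot is None:
                continue
            m[rank], m[pivot] = m[pivot], m[rank]
            inv = pow(m[rank][col], -1, p)      # modular inverse of the pivot
            m[rank] = [(inv * a) % p for a in m[rank]]
            for r in range(nrows):
                if r != rank and m[r][col]:
                    f = m[r][col]
                    m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
            rank += 1
        return rank == min(nrows, ncols)

The driver then only does the exact gmp nullspace computation when this cheap test fails.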

Now the thing is that I was using mod 19, since that worked for lower degrees. One thing I forgot when I started the run (remember, it had been a few years since I last looked at this code and ran it) is that the modulus cannot be the same as the degree. The matrices we need to work with have most entries divisible by the degree. So modding out by 19 essentially always made the matrix all 0 (except for a few 1s scattered around). So these matrices were essentially always singular mod 19 and the shortcut never triggered. After doing a useless mod 19 calculation we had to do the actual integer arithmetic anyway. That’s why it was slow. Damnit!

Well, the calculations were not wrong, I just did a lot more computation than needed. After a small amount of testing it seemed that mod 23 was a good finite field to proceed in, so I restarted the code. Suddenly 3-7 years turned into an initial estimate of 90 days, and after running things for a day, that turned into an estimate of 30 days.

Then I noticed one more thing (and Danny pointed this out too): his code used symmetry and just threw out half of the nonsymmetric polynomials, since the computations are the same. I remembered that my code didn’t do this. Implementing it didn’t make much sense back when the longest run we did was 5 days on one core (for code that is only ever run once or twice, small speedups are somewhat pointless). I implemented this idea and it seems to achieve a 33% reduction in time (there’s still the checking for symmetry, and there are of course symmetric polynomials, so that’s probably close to the best we can get). So anyway, I guess within 20 days we should have the answer.

After it finishes, I still have one more speedup up my sleeve. It could be that I can do the row reduction really fast mod 2 by using bitwise operations (each row would be an unsigned int). I am not sure what speedup I can achieve though; at best 90%, since that’s how many cases mod 2 catches, while mod 23 or so catches essentially everything. So the idea is to do mod 2, then mod 23, and only then, if the matrix is still singular, do the integer arithmetic. If the speedup is another 50%, and my most optimistic estimates hold, that would put degree 21 within the realm of possibility, though at least half a year on 4 cores. That is, within the range of something I’d be willing to run.
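For what it’s worth, the mod 2 version might look something like this (again just a Python sketch; the point is that a whole row packs into one machine word, so eliminating against a pivot row is a single XOR):

    def full_rank_mod_2(rows, ncols):
        # Each row is an integer used as a bit vector: bit i holds the parity
        # of entry i.  Row reduction over GF(2) is then just XOR.
        rows = list(rows)
        rank = 0
        for col in range(ncols):
            bit = 1 << col
            pivot = next((r for r in range(rank, len(rows)) if rows[r] & bit), None)
            if pivot is None:
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for r in range(len(rows)):
                if r != rank and rows[r] & bit:
                    rows[r] ^= rows[rank]       # one XOR clears the pivot bit in this row
            rank += 1
        return rank == min(len(rows), ncols)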

So, the mood went from “I’ll probably give up on d=19 soon” to “maybe d=21 is possible”. All this just by using a different prime :)


Syndicated 2013-01-21 17:26:57 from The Spectre of Math

Interesting calculation

An interesting back-of-the-envelope calculation: UC Irvine is using the differential equations book as its standard text. That’s a couple of hundred students every quarter, and there are at least a few other places with large and small lectures using the book. A reasonable estimate is that at the current adoption rate there are at least 1000 students every year who use the book. I recently checked on Amazon how much Boyce-DiPrima costs: the new edition is $184 (Yaikes!). It costs $110 just to rent for a semester (a used copy was $165). That’s 110-184 thousand dollars a year saved by students, just because of one open book adopted in a couple of large lecture classes. Presumably the adoption rate can only go up, so this number will go up, and that’s just from one of my books (savings from the real analysis book are going to be much smaller, since fewer students take that kind of class).

Now I have nothing against the publishers, but they have their incentives wrong. Boyce-DiPrima is a fine book, but … there is really no reason to print big bulky books on expensive paper for these classes. Locally printed coursepacks or cheaply printed paperbacks are much more efficient. And the students might actually keep their books, which might help them along in other classes. Currently most students return their books as soon as they can to recover most of the cost. So if you’re teaching, say, a PDE class as I did this semester, you can hardly tell the students to brush up on their calculus from their calculus book. They don’t have it anymore!

The incentive for me is simply to make the best book I can, because I want to (it makes me feel good, which I guess is all I can expect). Since I make almost no money on it, I don’t have to inflate the page count just to make the book more expensive, or add color and pretty pictures just for the hell of it (that would make the book quite a bit more expensive). Plus the book is more accessible. I already know students use the web version, even from their phones.

So anyway, I guess I’m providing at least 2 to almost 4 times my salary in free books.

Anyway, if you do want to buy the books (I make $2.50 on each, yay! I’ve made almost $400 so far; it’s mostly about making me feel good rather than making money), here are the links to lulu:

Real Analysis, $13.09 + shipping

Differential equations, $16.78 + shipping

Yes, I’m a fan of arbitrary-looking prices. Actually the reason for those prices is that I simply set the price so that I get $2.50 from each book, so it’s (cost of printing) + $2.50 + (lulu’s cut).


Syndicated 2012-12-19 17:02:56 from The Spectre of Math

New versions of books and new genius

So in the last two days I’ve put up new versions of both the differential equations textbook (3 new sections, and of course some fixes) and the real analysis textbook (fixes, plus 4 new exercises). I’ve also made a new release of Genius. Two of those I actually did today while my students were taking their final. The nice thing about proctoring tests for small upper division classes is that you can get stuff done. There is no cheating going on, and there are only a few questions, so I had over two hours to burn. Next semester will be quite different: I’ll have two calculus lectures with 250+ students each. Proctoring an exam for that many students is not at all a relaxing exercise (and then there’s the grading … ugh!)


Syndicated 2012-12-17 23:47:37 from The Spectre of Math

Yet another new section in DE book

In trying to avoid bad moods and keep stress levels down, people turn to hobbies. One of my hobbies is working on my textbooks, so I have written a new section on the Dirichlet problem for the Laplace equation in the circle for the differential equations book. See the draft section. The previous section 4.9 is OK, but the solution is far more natural in the circle, in polar coordinates, than in a square; that is, we obtain

\displaystyle  u(r,\theta) = \frac{a_0}{2} + \sum_{n=1}^\infty \left( a_n r^n \cos(n\theta) + b_n r^n \sin(n\theta) \right)

And then we can derive the Poisson formula, which is just cool. It’s also a good example of a more complicated change of variables, since we work in polar coordinates, and it shows a somewhat more complicated and different separation of variables.
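For reference, the formula in question (in its standard form for the unit disc with boundary data g; the draft section may normalize things slightly differently) is

\displaystyle  u(r,\theta) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{1-r^2}{1-2r\cos(\theta-\varphi)+r^2}\, g(\varphi) \, d\varphi ,

where the a_n and b_n in the series above are just the Fourier coefficients of g.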

Part of the motivation was that I did this topic in my PDE class so I had lecture notes and it really felt right for that point in the book, even to leave it as reading to interested students. The other part is that I have been improving the graphing ability of genius so I can do polar coordinates for example:

[Figure: surface plot of u(r,\theta) = r^{10} \cos(10\,\theta) over the disc, produced with genius]

That’s the graph of the solution u(r,\theta) = r^{10} \cos(10\, \theta), showing that high frequency on the boundary means fast decay as you go closer to the center of your domain.

There is no UI for polar coordinates though; there is just a function that lets you plot arbitrary surface data now. Notice how it’s not graphed on a square grid, but above the disc. Also notice that the inner rings have fewer points on them; that’s because I just compute fewer values at smaller radii (remember, I am passing in arbitrary data, a list of triples (x,y,z)). This will be in version 1.0.16, which should come out sometime at the end of next week (I have to let the translators have a go at it). Actually, the reason for doing this work on genius was not polar coordinates but showing numerical solutions in my PDE class. It’s just that one of my test cases was polar coordinates, so it just clicked and I thought: I have to write up this section, it’s too cool to pass up and I can make the graphs now.
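To give the idea, here is a sketch of generating that kind of surface data on a polar grid, in Python/numpy rather than genius (the new genius plotting function isn’t named here, so this only shows the data generation):

    import numpy as np

    triples = []
    for r in np.linspace(0.05, 1.0, 20):
        # fewer sample points on the inner rings, just like in the plot above
        for theta in np.linspace(0.0, 2 * np.pi, max(8, int(60 * r)), endpoint=False):
            x, y = r * np.cos(theta), r * np.sin(theta)
            z = r**10 * np.cos(10 * theta)      # u(r,theta) = r^10 cos(10 theta)
            triples.append((x, y, z))

    # The (x, y, z) triples then go to whatever routine plots scattered surface
    # data; in matplotlib, for instance, Axes3D.plot_trisurf would do.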

This brings the number of pages in the DE book to 315, and the number of exercises to 533. Yaikes! It’s become a beast. I’ll make the new version in a week or two so that it’s usable for next semester (so if you have comments on the new section, do let me know quickly).

I think a two-semester course could now possibly be run out of the book. What’s being added in the new version is essentially about 5-6 lectures. At my speed the whole thing is now approximately 65 lectures, so a bit less than two semesters’ worth, but if you go just a tad slower (as many people do), do more examples, and factor in exams, reviews, quizzes, etc., it’s just right, I think. You’ve got lots of room to spare if you want a two-quarter course.


Syndicated 2012-12-09 07:31:39 from The Spectre of Math

Bad memory

So I just remembered: it wasn’t that we thought the computation (see a previous post) would take half a year, it was that it would take 450 days on a 3 GHz CPU. I guess my memory was being optimistic; I remembered “half a year” when it was really more like “a year and a half”. OK, so the computation has now been running for a bit over 2 weeks on 4 cores. I guess I’m at least 10% there (I hope); two weeks on 4 cores is roughly 56 CPU-days, which would be around 12% of 450 if my cores kept up with that old estimate. It looks a bit worse from the output. It doesn’t seem computers have gotten all that much faster in the past few years (not at all, it seems, at least on the load I am running). The only thing better is more cores.


Syndicated 2012-12-07 06:47:54 from The Spectre of Math

Frobenius method and Bessel functions

I had occasion to talk about Bessel functions and mention the Frobenius method in my PDE class, and I realized that I do not mention either of these in the book. This was the section I did not quite get to when teaching at UCSD, so it never got written. Well, worry no more. I’ve written up a draft version of the section. It will appear in the next version of the book whenever I make one, but if you do have comments, do let me know. It’s good to catch typos or make changes now.
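For context (these are the standard statements, not quotes from the draft): Bessel’s equation of order p is

\displaystyle  x^2 y'' + x y' + (x^2 - p^2) y = 0 ,

and the Frobenius method means looking for a solution of the form

\displaystyle  y = x^r \sum_{k=0}^\infty a_k x^k , \qquad a_0 \neq 0 ,

where plugging in and looking at the lowest order terms (the indicial equation) forces r = \pm p.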

This brings the number of pages to 307 (together with the new delta function section) and the number of exercises to 521. Yay!

This also made me realize that Genius did not have Bessel functions implemented. They were actually easy to implement, as MPFR already has them, at least for integer orders and real values anyway. Then, as my current working directory of genius was such a mess from trying to include LAPACK, I decided to remove LAPACK from the genius git for now. I think what I will do is link to the Fortran version at some later point. It seems the Fortran LAPACK is available almost everywhere, so it should not be a bad new dependency, and it is much easier than trying to make the beast compile cleanly inside genius. Anyway, Bessel functions will be in Genius, which I think I ought to make a release of soon, as there are a bunch of other small changes to set upon the world.


Syndicated 2012-11-27 18:57:39 from The Spectre of Math

“Maxima is calculating”

So Friday afternoon I wanted to test for the existence of a certain mapping that takes one surface to another surface. Everything is algebraic, so one might assume that if a mapping exists it might actually be polynomial, and since everything is of low degree, the mapping might be as well. So I just set up brute force equations and tried an arbitrary degree 2 mapping. After a second or two, maxima returned no solutions to the resulting system. OK, so how about plugging in degree 3. It turns out I don’t need to test the linear terms, and there are 3 variables, so 16 unknown coefficients per component, and I get an algebraic system in 48 variables. Sounds bad, but a lot of the equations are of the form “x=0″. So I looked at a subset of the system. Already generating the equations took a few seconds, so I thought: this will take a few minutes. So I started “algsys” on the equations. Well, that was Wednesday afternoon. It is Sunday and the thing is still running. Unfortunately it just says “Maxima is calculating” in the wxMaxima window, so one has no clue if it will take another day or so, another year or so, or if the sun will implode first. I sort of have the feeling it is doing something stupid. Once I get more time for math on Monday, I’ll probably try to simplify the equations by hand first. I could also try for the solution (or lack thereof) numerically. In the meantime I’ll let it run. This is on my laptop, which is surely not meant as a computation machine. It’s only running on one core, so it’s not heating up too badly. When I was running some computations for days in the summer on all four cores, you could almost cook eggs on the keyboard.
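As a rough illustration of the brute-force setup (a toy example in sympy, not the actual surfaces or degrees from above, which aren’t written out here): ask whether some degree 2 map takes the unit circle x^2+y^2=1 into itself. Generating the coefficient equations is the quick part; it’s the final solve (algsys in maxima, sympy.solve here) that can blow up.

    import sympy as sp

    x, y = sp.symbols('x y')
    a = sp.symbols('a0:6')                  # unknown coefficients of the first component
    b = sp.symbols('b0:6')                  # unknown coefficients of the second component
    monomials = [1, x, y, x**2, x*y, y**2]

    u = sum(ai * m for ai, m in zip(a, monomials))
    v = sum(bi * m for bi, m in zip(b, monomials))

    source = x**2 + y**2 - 1                # defining equation of the source curve
    target = sp.expand(u**2 + v**2 - 1)     # target equation composed with the map

    # The map works precisely when the target vanishes on the source, i.e. when
    # the remainder of the target upon division by the source polynomial is
    # identically zero in x and y.  Each coefficient gives one equation in the
    # unknowns a0,...,b5.
    _, remainder = sp.reduced(target, [source], x, y)
    equations = sp.Poly(remainder, x, y).coeffs()

    print(len(equations), 'equations in', len(a) + len(b), 'unknowns')
    # solutions = sp.solve(equations, list(a) + list(b), dict=True)   # the potentially slow step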

On a related front, I decided that my work computer was sitting too idly, so I started the degree 19 calculation that Danny and I never did for our paper [1]. In 2008 we thought it would take at least half a year. Presumably computers have gotten a tad quicker in the meantime (and I’m running it on 4 cores), so perhaps the result will come in sooner. Still, the progress seems slow from the output so far. It is a bit difficult to judge; I’ll try to estimate the time left more precisely later on, but just as a first guess from looking at the output, I don’t think this will be done before Christmas.

There is something magical about pressing ENTER to start a computation you know will take months to complete. It is one of the few places where you really use the fact that you have a fast computer. Most computer power is totally wasted. For example, in a somewhat similar time frame, Firefox managed to rack up 70 minutes of CPU time (maxima is up to 5208 now). And that’s with only very occasional short browsing over the last few days. It seems it’s mostly the open tabs that eat up time, run the CPU, and heat our house. Come to think of it, my office will be quite warm, I bet, once I get there on Monday; I don’t think the heating runs on the room thermostat, as the switch on that thing is in the “off” position and it still heats the room. So with the added heating from 4 cores running at top speed, and it being a small room, it should get toasty.

[1] Jiří Lebl and Daniel Lichtblau, Uniqueness of certain polynomials constant on a line, Linear Algebra and its Applications, 433 (2010), no. 4, 824-837, arXiv:0808.0284.


Syndicated 2012-11-25 15:57:59 from The Spectre of Math
