Older blog entries for gabe (starting at number 7)

woo. hm. been a while since i last posted.

well. i am beginning to understand more and more about OpenACS every day. and now, thanks to a suggestion from jadeforrest (an individual in #openacs on freenode), i'm keeping a log of my learning process so i can better help others in the future. i kinda wish i'd thought of that from the start. then i might have a better idea of how to help others understand the system.

regardless, OpenACS is a truly wonderful system. i was using Joel Aufrecht's excellent development tutorial for OpenACS to figure out how to produce packages for oacs. after going through the tutorial i was able to quickly add new functionality to that package. it's really quite easy to rapidly develop apps with oacs.

so now that i've pretty much decided to use OpenACS for my own projects, i need to change my hosting to one that provides OpenACS and PostgreSQL. i haven't yet decided, but both zill.net and acorn hosting look like very nice, small hosting companies. not as cheap as pghoster.com, but i'd be getting a lot more out of it...

i've been discussing development of a sync application for the Zaurus on OSX on the zaurus-osx mailing list. seems like a few people are already at it, but they're writing the app in Python. personally, i don't really like the idea of forking/duplicating efforts, but i'd rather have a real app, written using cocoa to sync my zaurus and powerbook. that, and i want it to do more than others do, like maintaining a package feed locally on my pb and being able to transfer and install / remove packages on the zaurus.

BEN! in regards to AOLServer, you could set up filters to handle access to bugzilla. I don't know if you need it to work exactly the same way as a .htaccess (it doesn't), but it may work well enough for you.

I moved my website over to pghoster.com so I can finally control everything myself. For $10/mo, it's a damn good deal. I get access to both MySQL and PostgreSQL databases, neat. Now all I need to do is create a website. I think it's time to scrap the old one since it's been about two years since I last touched it. *sigh*

In other news, I've discovered a new toy I want: a SPARCstation 20 w/ quad Ross 200MHz hyperSPARC CPUs, a full 512MB of RAM (not very much for a server, but enough to play with), and a decent disk. I used to have a SPARC 20 when I was working at Netscape and it was a nice little box. I kinda want an HP-UX box too. I think this has something to do with reading Phil's literature.

I'm beginning to think twice about using OpenACS for our new system at work. AOLServer & Tcl are still in, but OpenACS seems to be a little too much for our needs. Our current PHP-based code gets about 10 hits per second on a quicksilver g4 (db is on a dual 800 filled with ram and such). I'm not terribly happy with it. I wish there was more literature out there comparing different systems for servicing web applications and such. E.g., which hardware, OSes, configurations, software, etc. are the best for what, and why. Google's definitely a help, and there are a few o'reilly books that seem to be relevant (Server Load Balancing and High Performance Computing), but not much else...

AAAAAAAAAAAAHHHHHHHHHHHHHHH

Can someone please modify reality so that days consist of 48 hours instead of 24? Thanks! (or maybe some day Modafinil will be declared 100% safe w/out side effects and i'll only have to sleep 8 hours out of every 48, haha)

Where does all of my time go? I get up, go to work, come home, sleep, get up, go to work... On occasion I have some time to read, program, play games.

I managed to actually have a weekend. TWO DAYS without doing anything but messing around on the computer, playing GTA3, and reading The Stand (the uncut version). I did manage to get PostgreSQL / AOLServer and OpenACS playing nicely on my iMac at the office. (Gosh it's nice to have Mac & unix together in one machine so I can remotely login and get stuff done.) I had to tweak the sysctl settings in the startup scripts to allow programs to use more than 4MB of shared memory (this made PostgreSQL much happier). I managed to get AOLServer 3.3.1ad13 all setup and such. And I managed to get OpenACS cvs (4.6 beta?) up and running too. But I'm not sure what to think of it yet. It's a neat system, but it's way complex. AOLserver is an incredible webserver.
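For reference, the shared-memory tweak was along these lines. This is only a sketch: the sysctl keys are the SysV ones Mac OS X exposes, but the values here are illustrative, not necessarily the ones I used.

```shell
# Raise the SysV shared-memory limits at boot so PostgreSQL can
# allocate more than the stock 4MB segment (illustrative values):
sysctl -w kern.sysv.shmmax=16777216   # max segment size in bytes (16MB)
sysctl -w kern.sysv.shmall=4096       # total shared memory, in 4KB pages
```

On this era of OSX these go in the system startup scripts so they take effect before PostgreSQL starts.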

My only issue with it all is performance. AOLServer by itself manages about 560 pages per second serving a static 8KB html document. I benchmarked that using 'ab' from another machine on the LAN. I think I had to set AOLServer to start up 200 threads and keep them constantly alive. I don't know what the default is, but with that setting I got about 5 hits per second from OpenACS. I tried running it with 50 threads and got about 1 hit per second. I think that was killing the CPU though. ;) Interestingly enough, when I started AOLServer with OpenACS and 200 threads, it was the first time I got my load average over 100 (700MHz iMac G4 w/768MB RAM). Fun!
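For context, those thread settings live in the AOLServer Tcl config file. Roughly like this, as a sketch: the section/param names follow the stock AOLServer sample config, and $servername is whatever server instance is defined earlier in the same file.

```tcl
# Pre-start a fixed pool of connection threads (sketch; names per
# the stock AOLServer sample config, values from my test run):
ns_section "ns/server/${servername}"
ns_param   minthreads 200   ;# threads created at startup
ns_param   maxthreads 200   ;# upper bound on the pool
```

Keeping minthreads equal to maxthreads avoids paying thread-creation cost during the benchmark itself.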

*sigh*

Ok, I'm back. Moved into a new apartment, finally sold my iBook and got a Pismo. When I've got some money I'll upgrade it to a G4. I got an iMac G4 at the office to play with now too.

Got some good work done on my import code for SRM recently. I added an additional function to the SRM server, srm_user_function_list(), so that the import code won't have to pick the user-defined functions out of the list returned by srm_function_list(). The import code, in general, works. Right now it's hooked up to a simple handler that just prints the name of the function that was called. Adding the entries to the executor's function_table works fine, and I've added a HashTable to the SRM PHP extension's thread-safe globals to keep track of the imported functions and which connection they were imported from.

I have to add code to the module initialization call to alloc and init the HashTable, and then more code to the request shutdown and module shutdown calls to handle removing the imported functions from the executor's function table. I probably have a bit more stuff to clean up too. Then it will be ready for some real testing.

I've been learning a bit about using GDB throughout my diddling with SRM. I have to say that it's neat. I've also been learning about trying to deal with the Zend Engine's memory management and such. It's nice that it informs you of memory leaks when it shuts down. (well, I'm just testing it with the CLI SAPI, I wonder what happens when you leak with the apache SAPI or others?)

I've also changed the import interface. It now works like this:

$srm = new SRM('/tmp/srm.socket');
$srm->import_library();                        // import all library functions
$srm->import_library("my_func");               // import function "my_func"
$srm->import_library(array("func1", "func2")); // import a few select functions
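The point of the import is that afterwards the library functions are callable directly, with no $srm-> prefix. A hypothetical sketch (my_func is just a stand-in for whatever the SRM library actually defines):

```php
<?php
// hypothetical usage sketch; requires the SRM extension and a
// running SRM whose library defines my_func()
$srm = new SRM('/tmp/srm.socket');
$srm->import_library("my_func");

// once imported, my_func lives in the script's own function table,
// so it's called like any local function:
$result = my_func();
?>
```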

13 Jul 2002 (updated 13 Jul 2002 at 15:18 UTC) »
12 Jul 2002

Ok, so this is for last night, I wasn't able to connect to write...

So I finalized my ideas / proposal for an SRM feature to import library functions into the running script's function table. It turned out that an import function was the best method to use. Many thanks again to Derick for answering all of my questions about the inner workings of SRM and PHP. So, I started working on adding this to the SRM extension. Basically, this is all there is to it from the PHP side:

$srm = new SRM('/tmp/srm.socket');
$srm->import_library();     // import functions
// OR
$srm->import_library(true); // import functions, and overwrite functions that may already exist

And that's pretty much all there is to it. I've got the code to handle the method call now, and I just have to clone and slightly modify the SRM class's method handler.

Some well-spent time digging around in the xdebug PHP extension, and more consultation with Derick, have led me to look into the executor part of the Zend engine for where I can properly redirect function calls. I'm at odds now as to how to approach this though. It's probably easiest to just wrap the zend_execute() function call like xdebug does and check the function hash table to see whether a function exists or not. But I keep thinking about the possibility of just providing a method in the SRM object to "import" the functions from the SRM library into the running PHP SRM client's function hash table, with a handler function to redirect those calls to SRM. There's also the possibility of merging both approaches into a wrapper for zend_execute() that "learns" about SRM functions by checking whether they exist in SRM and, if they do, adding them to the function hash so the function call handler will take care of calling them. I guess it all depends on what makes the most sense - so I'll poll the mailing list and see what folks think.

So we've now officially begun selling our product at work today - which means this is the first (real) job I've had with a company that has produced, marketed, and sold an actual product. This is also the longest I've been at any job, the most fun I've had, the most learning & maturing I've done, and overall the most positive work experience so far. All thanks to the wonders of open source software such as PHP, Apache, MySQL, etc.

Two weeks until we move into our new apartment. :)

Well, another round of massive changes to our site at work, combined with another random GTA3 binge and apartment hunting, have delayed me from having any damn spare time lately. Excuses, excuses... sheesh.

So, I think I've indefinitely put off the netinfo extension for PHP in favor of doing some work with SRM. At some point after I joined the php-qa mailing list earlier this year I inevitably ran across a link to SRM in Derick's signature. A while later I asked some questions on php-dev, somewhat related to app-server type functionality with PHP, and got another pointer to SRM. So I finally decided to download it and play with it last week, and I think it's absolutely the best thing for PHP that I've seen in a while.

I've spent the past week tearing it apart on the inside, and with help from Derick, Dan, and Jani got it working on my iBook finally. I learned a bit about Apple's dyld linker in the process too. And Derick has been helpful answering my frequent barrages of questions about SRM, PHP/Zend, gdb, etc.

So, I've figured out a few things I'd like to do to help out SRM: first, it needs a decent authentication system; second, I'd like a way to call SRM library functions without having to call them as methods on an SRM class object. (That makes for cleaner code that one can use with & without SRM.)

I have to wait on the first task until Derick is finished with his new network protocol abstraction layer (so SRM can process requests via its native socket interface, HTTP, XML-RPC/SOAP, etc.). At some point I'd like to create an SMTP plugin (mostly to see if I can get SRM to handle bounced emails and deactivate accounts, or note whom not to send messages to, etc., in our system at work), but that's another story. I'd like to create a new authentication module for SRM that these protocol plugins can make use of, so that users can set up which protocols are authenticated from where, etc. There was some mention on the mailing list about using SRM in a hosted environment... After much thought, I think the best idea would be to just run an instance for each user - let them set it up themselves if they want to run it.

On to the second task then... figuring out how to make PHP (as an SRM client) pass function calls on to SRM instead of having to call them through the SRM object. Well, there are two ways to go about this that I can think of: create a Zend extension to pass off failed function calls to SRM, or somehow have the SRM extension for PHP add SRM library functions to the PHP/Zend function table and then handle those calls directly. I've only investigated the first idea so far, after much questioning of Derick and then other folks on #php.bugs (thanks Andrei, etc.), so that's what I've started playing with. Derick pointed me to his xdebug PHP extension as a place to investigate function call handlers for the Zend Engine - so that's where I've started. I've torn up the xdebug extension to learn more about ZE, and it's pretty damn cool. I'm hoping I'll have at least figured out how to handle task #2 (properly) by the end of the week.

15 days until we get to move into our new apartment in Newburyport, MA, where I can actually get BANDWIDTH! No more modem! I even get a choice between cable and dsl. How cool is that? We're actually saving a bit of money too, so I'm hoping I can get an eMac for home and run it as a server on a DSL line, and perhaps provide some space for Derick and other SRM folks to test/develop SRM on OSX. After that, I think I've finally decided to sell the iBook and get an older Pismo PowerBook G3 and have it upgraded to a G4. The only thing that makes it not as good as a TiBook is the graphics card - but I honestly don't ever play games in OSX anyways... so, it's the perfect fit, and cheap too!

4 May 2002 (updated 6 May 2002 at 20:52 UTC) »

After about two years I finally have access to my advogato account again. (Thanks Raph!) I created it back in 1999 or 2000 when I was living in California. Then I forgot my password at some point and well.. here I am.

I spent some time tracking down folks I know on advogato and certifying them as the lunatics they are.

I started some work on my NetInfo extension for PHP a few weeks ago. It took a while to track down the proper API to use because Mac OS X's documentation SUCKS! Thank goodness for Darwin. I got a pointer to check out the /usr/include/netinfo stuff and I stuck with that API. Someone had previously pointed me to the directory services API in OSX, which is a diseased beast that needs to be incinerated and never seen by anyone again.

So here's the API I've come up with for this NetInfo extension:

  • ni_open(domain [, user, pw [, host [, timeout]]])
  • *ni_close()
  • ni_resync(domain [, tries])
  • ni_statistics(domain)
  • ni_domainname(domain)
  • ni_rparent(domain)

  • ni_dir_create(conn, dir)
  • ni_dir_destroy(conn, dir)
  • *ni_dir_rename(conn, dir, new)
  • ni_dir_list(conn, dir)
  • ni_dir_properties(conn, dir)

  • ni_prop_rename(conn, dir, prop, new)
  • ni_prop_create(conn, dir, prop, value)
  • ni_prop_destroy(conn, dir, prop)
  • ni_prop_append(conn, dir, prop, value)
  • ni_prop_merge(conn, dir, prop, value)
  • ni_prop_values(conn, dir, prop)

  • ni_value_insert(conn, dir, prop, value, index)
  • ni_value_destroy(conn, dir, prop, value)
  • *ni_value_append(conn, dir, prop, value)

  • *ni_find_dirs_with_prop(conn, prop) -- find directories that have a property
  • *ni_find_dirs_with_prop_value(conn, prop, value)

* = non-existent in the netinfo C API; something i need to create

Notes:
once a connection has been successfully initiated with ni_open(), the conn argument becomes optional for the other functions.
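To make the intended usage concrete, here's a hypothetical sketch of the API in action. The extension doesn't exist yet, and the domain, directory paths, and property names below are made up for illustration; the function names are from the list above.

```php
<?php
// hypothetical sketch of the proposed NetInfo extension in use
$conn = ni_open("/");                    // open the local NetInfo domain
$dirs = ni_dir_list($conn, "/users");    // list the user directories
$vals = ni_prop_values($conn, "/users/gabe", "shell");

// per the note above, conn is optional once ni_open() has succeeded:
ni_close();
?>
```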

This seems like it will be a good project to get me back into the habit of writing C code. I found an awesome book at B&N called "Pointers on C". It's the best damn programming book I've ever read, not that I've finished it yet... So, good luck to me on getting this done. Perhaps I can get it done within a month and maybe get it included in PHP 4.3, since that's supposed to be the first official OSX-supported release.
