Older blog entries for rcaden (starting at number 80)

Creating PHP Web Sites with Smarty

I recently relaunched SportsFilter using the site's original web design on top of new programming, replacing a ColdFusion site with one written in PHP. The project turned out to be the most difficult web application I've ever worked on. For months, I kept writing PHP code only to throw it all out and start over as it became a ginormous pile of spaghetti.

Back in July, SportsFilter began crashing frequently, and neither I nor the hosting service was able to find the cause. I've never been an expert in ColdFusion, Microsoft IIS or Microsoft SQL Server, the platform we chose in 2002 when SportsFilter's founders paid Matt Haughey to develop a sports community weblog inspired by MetaFilter. Haughey puts a phenomenal amount of effort into the user interface of his sites, and web designer Kirk Franklin made a lot of improvements over the years to SportsFilter. Users liked the way the site worked and didn't want to lose that interface. After I cobbled together a site using the same code as the Drudge Retort, SportsFilter's longtime users kept grasping for a delicate way to tell me that my design sucked big rocks.

PHP's a handy language for simple web programming, but when you get into more complex projects or work in a team, it can be difficult to create something that's easy to maintain. The ability to embed PHP code in web pages also makes it hard to hand off pages to web designers who are not programmers.

I thought about switching to Ruby on Rails and bought some books towards that end, but I didn't want to watch SportsFilter regulars drift away while I spent a couple months learning a new programming language and web framework.

During the Festivus holidays, after the family gathered around a pole and aired our grievances, I found a way to recode SportsFilter while retaining the existing design. The Smarty template engine makes it much easier to create a PHP web site that enables programmers and web designers to work together without messing up each other's work.

Smarty works by letting web designers create templates for web pages that contain three things: HTML markup, functions that control how information is displayed, and simple foreach and if-else commands written in Smarty's template language instead of PHP. Here's the template that displays SportsFilter's RSS feed:

<?xml version="1.0" encoding="ISO-8859-1"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>SportsFilter</title>
    <link>http://www.sportsfilter.com/</link>
    <description>Sports community weblog with {$member_count} members.</description>
    <docs>http://www.rssboard.org/rss-specification</docs>
    <atom:link rel="self" href="http://feeds.sportsfilter.com/sportsfilter" type="application/rss+xml" />
{foreach from=$entries item=entry}
    <item>
      <title>{$entry.title|escape:'html'}</title>
      <link>{$entry.permalink}</link>
      <description>{$entry.description|escape:'html'}</description>
      <pubDate>{$entry.timestamp|date_format:"%a, %d %b %Y %H:%M:%S %z"}</pubDate>
      <dc:creator>{$entry.author}</dc:creator>
      <comments>{$entry.permalink}#discuss</comments>
      <guid isPermaLink="false">tag:sportsfilter.com,2002:weblog.{$entry.dex}</guid>
      <category>{$entry.category}</category>
    </item>
{/foreach}
  </channel>
</rss>

The Smarty code in this template is placed within "{" and "}" brackets. The foreach loop pulls rows of weblog entries from the $entries array, storing each one in an $entry array. Elements of the array are displayed when you reference them in the template -- for example, $entry.author displays the username of the entry's author.

The display of variables can be modified by functions that use the "|" pipe operator. The escape modifier, used in {$entry.title|escape:'html'}, encodes characters such as "<" and "&" so they can be included safely in an XML format such as RSS. (It's actually encoding them as HTML, but that works for this purpose.)

Because Smarty was developed with web applications in mind, there are a lot of built-in functions that make the task easier. SportsFilter displays dates in a lot of different forms. In my old code, I stored each form of a date in a different variable. Here, I just store a date once as a Unix timestamp value and call Smarty's date_format function to determine how it is displayed.
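For example, the same stored timestamp can drive every form of date on a page. This is an illustrative sketch rather than a template from the site itself:

{* one Unix timestamp, three different presentations *}
{$entry.timestamp|date_format:"%A, %B %d, %Y"}  {* Wednesday, January 14, 2009 *}
{$entry.timestamp|date_format:"%m/%d/%Y"}       {* 01/14/2009 *}
{$entry.timestamp|date_format:"%H:%M"}          {* 19:39 *}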

Smarty makes all session variables, cookies, and the request variables from form submissions available to templates. In SportsFilter, usernames are in $smarty.session.username and submitted comments are in $smarty.request.comment. There also are a few standard variables such as $smarty.now, the current time.
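Those reserved variables make it easy for a designer to vary a page based on login state. Here's a hypothetical snippet along those lines -- the markup is mine, not SportsFilter's:

{if $smarty.session.username}
  <p>Welcome back, {$smarty.session.username}.</p>
{else}
  <p><a href="/login">Log in</a> to post a comment.</p>
{/if}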

To use Smarty templates, you write a PHP script that stores the variables used by the template and then displays the template. Here's the script that displays the RSS feed:

// load libraries
require_once('Smarty.class.php'); // ships with the Smarty distribution
require_once('sportsfilter.php'); // the site's own class
$smarty = new Smarty();
$spofi = new SportsFilter();

// load data
$entries = $spofi->get_recent_entries("", 15, "sports,");
// round the membership count down to the nearest thousand
$member_count = floor($spofi->get_member_count() / 1000) * 1000;

// make data available to templates
$smarty->assign('spofi', $spofi);
$smarty->assign('entries', $entries);
$smarty->assign('page_title', "SportsFilter");
$smarty->assign('member_count', $member_count);

// display output
header("Content-Type: text/xml; charset=ISO-8859-1");
$smarty->display('rss-source.tpl');

Smarty compiles web page templates into PHP code, so if something doesn't work like you expected, you can look under the hood. There's a lot more I could say about Smarty, but I'm starting to confuse myself.

There are two major chores involved in creating a web application in PHP: displaying content on web pages and reading or writing that content from a database. Smarty makes one of them considerably easier and more fun to program. I'm fighting the urge to rewrite every site I've ever created in PHP to use it. That would probably be overkill.

Syndicated 2009-01-14 19:39:18 from Workbench

Peace Declared Between Myself and Sweden

As it turns out, Sweden did not intentionally declare war on my web server earlier this month. Programmer Daniel Stenberg explains how the international incident happened:

A few years ago I wrote up a silly little perl script (let's call it script.pl) that would fetch a page from a site that returns a "random URL off the internet." I needed a range of URLs for a test program of mine and just making up a thousand or so URLs is tricky. Thus I wrote this script that I would run and allow to get a range of URLs on each invoke and then run it again later and append to the log file. It wasn't a fancy script, but it solved my task.

The script was part of a project I got funded to work on, that was improving libcurl back in 2005/2006 so I thought adding and committing the script to CVS felt only natural and served a good purpose. To allow others to repeat what I did.

His script ended up on a publicly accessible web site that was misconfigured to execute the Perl script instead of displaying the code. So each time a web crawler requested the script, it ran again, making 2.6 million requests on URouLette in two days before it was shut down.

Stenberg's the lead developer of curl and libcurl, open source software for downloading web documents that I've used for years in my own programming. I think it's cool to have helped the project in a serendipitous, though admittedly server-destroying, way.

To make it easier for programmers to scarf up URouLette links without international strife, I've added an RSS feed that contains 1,000 random links, generated once every 10 minutes. There are some character encoding issues with the feed, which I need to address the next time I revise the code that builds URouLette's database.

This does not change how I feel about Bjorn Borg.

Syndicated 2008-12-30 16:38:51 from Workbench

Using Treemaps to Visualize Complex Information

I spent some time today digging into treemaps, a way to represent information visually as a series of nested rectangles whose colors are determined by an additional measurement. If that explanation sounds hopelessly obtuse, take a look at a world population treemap created using Honeycomb, enterprise treemapping software developed by the Hive Group:

World population treemap screenshot created by Honeycomb, the Hive Group's treemapping software

This section of the treemap shows the countries of Africa. The size of each rectangle shows its population relative to the other countries. The color indicates population density, ranging from dark green (most dense) to yellow (average) to dark orange (least dense). Hovering over a rectangle displays more information about it.

A treemap can be adjusted to make the size and color represent different things, such as geographic area instead of population. You also can zoom in to a section of the map, focusing on a specific continent instead of the entire world. The Honeycomb treemapping software offers additional customization, which comes in handy on a Digg treemap that displays the most popular links on the site organized by section.

By tweaking the Digg treemap, you can see the hottest stories based on the number of Diggs, number of Diggs per minute and number of comments. You also can filter out results by number of Diggs, number of Diggs per minute or the age of the links.

I don't know how hard it is to feed a treemap with data, but it seems like an idea that would be useful across many different types of information. As a web publisher, I'd like to see a treemap that compares the web traffic and RSS readership my sites receive with the ad revenue they generate. The Hive Group also offers sample applications that apply treemaps to the NewsIsFree news aggregator, Amazon.com products, and iTunes singles. This was not a good day to be a Jonas Brother.

Syndicated 2008-12-23 22:48:54 from Workbench

Finding Updated Feeds with Simple Update Protocol

FriendFeed is working on Simple Update Protocol (SUP), a means of discovering when RSS and Atom feeds on a particular service have been updated without checking all of the individual feeds. Feeds indicate that their updates can be tracked with SUP by adding a new link tag, as in this example from an Atom feed:

<link rel="http://api.friendfeed.com/2008/03#sup" href="http://friendfeed.com/api/sup.json#53924729" type="application/json" />

The rel attribute identifies the link as a SUP declaration. The href attribute contains the URL of a JSON document that identifies recently updated feeds by their SUP-IDs; the fragment at the end of that URL ("53924729") is this feed's own SUP-ID. There's also a type attribute that contains "application/json" to indicate the content type of the linked resource.

Developer Paul Buchheit makes the case for the protocol on FriendFeed's blog. "[O]ur servers now download millions of feeds from over 43 services every hour," he writes. "One of the limitations of this approach is that it is difficult to get updates from services quickly without FriendFeed's crawler overloading other sites' servers with update checks."
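To make that polling model concrete, here's a rough PHP sketch of a SUP consumer. It assumes the JSON document holds an updates array whose entries begin with a SUP-ID, which is how FriendFeed's draft describes it; the URL and ID below come from the link tag above:

// check whether one feed has updated, without fetching the feed itself
// (a sketch of the idea, not FriendFeed's client code)
$sup_url = 'http://friendfeed.com/api/sup.json';
$feed_sup_id = '53924729'; // the fragment from the link tag's href

$sup = json_decode(file_get_contents($sup_url), true);
$updated = array();
foreach ($sup['updates'] as $update) {
    $updated[$update[0]] = true; // first element of each entry is a SUP-ID
}
if (isset($updated[$feed_sup_id])) {
    // only now is it worth requesting the full feed
    echo "Feed $feed_sup_id has updated recently.\n";
}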

My first take on the idea is that defining a relationship with a URI is too different from standard link relationships in HTML, which employ simple words like "previous", "next", and "alternate". When new relationships have been introduced, they follow this convention, as Google did when it proposed nofollow.

Also, neither RSS 1.0 nor RSS 2.0 allows more than one link tag in a feed, so the SUP tag would only be valid in Atom feeds.

Both of these concerns could be addressed by identifying the SUP provider with a new namespace, as in this hypothetical example:

<rss xmlns:sup="http://friendfeed.com/api/sup/">
<channel>
<sup:provider href="http://friendfeed.com/api/sup.json#53924729" type="application/json" />
...

Six Apart has offered an alternate solution that seems more likely to work for large hosting sites and constant feed-checking services like FriendFeed. The company produces an update stream of Atom data indicating an update on any of the thousands of TypePad or Vox blogs.

Another potential solution would be to borrow the technique used by Radio UserLand blogs to identify a list of recently updated sites: Add a category tag to the feed with the value "rssUpdates" and a domain attribute with the URI of XML data containing the list:

<category domain="http://rpc.weblogs.com/shortChanges.xml">rssUpdates</category>

The XML data is in the weblog changes format used by Weblogs.Com.
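For reference, that format is a list of weblog elements inside a weblogUpdates root, with a when attribute giving the seconds since each site changed. This abbreviated example uses made-up values:

<?xml version="1.0"?>
<weblogUpdates version="1" updated="Sat, 06 Dec 2008 16:40:59 GMT" count="2">
  <weblog name="Workbench" url="http://workbench.cadenhead.org/" when="807" />
  <weblog name="Example Weblog" url="http://weblog.example.com/" when="1502" />
</weblogUpdates>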

Syndicated 2008-12-06 16:40:59 from Workbench

Customizing Apache Directory Listings with .htaccess

I was clearing off my desk today when I found an article I've been meaning to scan and send to somebody -- the story of how my friends almost elected a dalmatian and squirrel to the homecoming court of the University of North Texas in 1989. The alumni magazine wrote a feature on Hector the Eagle Dog and Agnes the Squirrel's campaign, which attracted national media and made a few of the human homecoming candidates very angry.

I can never tell when a file's too big to send in email without aggravating the recipient, so I upload files to my server and email the links instead. I decided to make this process easier by creating a clippings directory where uploaded files show up automatically.

The Apache web server can publish a listing of all files in a directory, as the official Apache site does in its images subdirectory. I wanted to make my clippings page look more like the rest of my weblog, so I found a tutorial on customizing directory listing pages.

First, I created an .htaccess file in the directory and turned directory indexing on with this command:

Options +Indexes

This command only works on servers that are configured to let users override options in .htaccess files (Apache's AllowOverride Options setting). For security reasons, I turn directory listings off by default, so they only appear when I specifically configure a directory to reveal its contents.

Next, I created header and footer web pages that contain the HTML markup to display above and below the directory listing. These files are identified by two more commands in .htaccess:

HeaderName header.html
ReadmeName footer.html

These web pages are located in the clippings directory. For the final step, I added a description of PDF documents and made sure that the header and footer files are not included in the listing:

AddDescription "PDF Document" .pdf
IndexIgnore header.html footer.html
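Putting the pieces together, the entire .htaccess file for the clippings directory comes to just five lines:

Options +Indexes
HeaderName header.html
ReadmeName footer.html
AddDescription "PDF Document" .pdf
IndexIgnore header.html footer.html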

There's a lot more that can be customized in an Apache directory listing, as the tutorial demonstrates, but for my project it seemed like overkill.

Update: Alternatively, I could've checked to see if the story was already online. Auugh.

Syndicated 2008-11-27 00:30:05 from Workbench

Sharing Bookmarks and Feed Lists with XML

I'm working on a programming project that requires an XML format to represent bookmarks and other collections of URIs, but before I reinvent the wheel I'd like to see if there's an existing format that meets my goals. The format should be able to hold all of the following information: bookmarks organized into a hierarchy of folders, tags or categories assigned to each bookmark, whether a bookmark is an RSS or Atom feed, and the relationship between a site's home page and its feed.

There are several potential formats that could be put to use: XBEL, the outline formats OPML and XOXO, and the syndication formats RSS and Atom. Each has drawbacks, as I'll go over in upcoming posts here on Workbench.

I'm starting with XBEL, because that's the best-supported format specifically designed to hold bookmarks. XBEL was created in 1998 by members of the Python community led by Fred L. Drake Jr. XBEL 1.0 continues to be the only release, though there's occasional talk on the XBEL-Specs mailing list about developing a new version.

XBEL was designed to represent browser bookmarks and has become the native format for storing them in the Konqueror and Galeon browsers. There are add-ons that extend XBEL support to more popular browsers -- one example is SyncPlaces, a Firefox add-on that can manually import and export XBEL bookmarks.

Here's what a bookmark looks like in XBEL data produced by SyncPlaces:

<bookmark id="row123" added="2008-11-25T17:30:22.352" modified="2008-11-25T17:30:22.522" href="http://workbench.cadenhead.org/">
  <title>Workbench</title>
  <info>
    <metadata owner="Mozilla" dateadded="1227634222352963" lastmodified="1227634222522963"/>
  </info>
  <desc>Rogers Cadenhead's personal weblog</desc>
</bookmark>

Bookmarks in XBEL can be grouped into folders, which themselves can contain more folders to create a hierarchy. The format's well-designed and can be extended by namespaces or the metadata element, which in the preceding example carries Firefox-specific information.

There are several drawbacks to using XBEL. The format predates social bookmarking and lacks support for tagging bookmarks or assigning them to categories like the ones employed by the Open Directory Project.

XBEL also predates the popularity of syndication, so there's no way to identify that bookmarks are RSS or Atom feeds. You also can't establish a relationship between a web site's home page and its feed. A few years ago on XBEL-Specs I floated the idea of adding type and rel attributes to bookmarks that function like they do in Atom, which would be all that's required to publish blogrolls and feed subscription lists with the format.
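Under that proposal, a feed bookmark might have looked something like this. The markup is hypothetical -- rel and type are not part of XBEL 1.0:

<bookmark href="http://workbench.cadenhead.org/rss" rel="alternate" type="application/rss+xml">
  <title>Workbench</title>
  <desc>RSS feed for Rogers Cadenhead's personal weblog</desc>
</bookmark>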

XBEL can't be used for web directories, feed lists or social bookmarks without extending the format. I think all three are strong enough use cases to be part of a bookmark format's core set of elements. If I choose XBEL, most of my project's functionality won't be supported by today's XBEL tools or client libraries, which is the primary reason to adopt an existing format.

Syndicated 2008-11-25 19:42:31 from Workbench

CBS Takes 'Ex List' Off Schedule

CBS has pulled The Ex List off its schedule, which is good news for my TV Death Pool:

Eye has yanked the drama off the sked, effective this Friday. A repeat of NCIS will air in its place.

Decision comes after The Ex List averaged 5.3 million viewers and a 1.5 rating/5 share in its final airing, last Friday. The Ex List repped CBS' weak link on Friday nights, where Ghost Whisperer and Numbers both won their hours.

The Ex List had an idea that was better in concept than in execution. A single woman (Elizabeth Reaser) is told by a psychic that she has one year to find her true love or end up alone, and the guy's somebody she already dated.

Reaser's an appealing actress as the unlucky-in-love woman, but every week she chased after -- and usually bedded -- some old boyfriend who had become a stranger to her. So there was a new male guest star every week, just like on Love Boat, but he wasn't just climbing aboard a boat.

Syndicated 2008-10-28 14:55:56 from Workbench

Cyber-Cowboy Post-Apocalyptic Kung Fu

I found a great "cyber-cowboy post-apocalyptic fu" music video on another blog this morning. Watch for the appearance of Col. Wilma Deering, the Planet of the Apes Statue of Liberty and the film crew in a mirror.

This video for Muse's "Knights of Cydonia" is the work of Joseph Kahn, a prolific music video director whose next project is a film based on William Gibson's Neuromancer. (Via Stan!.)

Syndicated 2008-10-28 13:44:11 from Workbench

Local Blogger: 'Barack Obama Loathes My Kind'

I mentioned earlier that some of my neighbors in North Florida are having trouble accepting the possibility of an Obama presidency. One of them is Kim "Velociman" Crawford, who's going to flee to the Georgia mountains if Obama wins:

... I firmly believe Barack Obama absolutely loathes my kind. This man will not be content to win the presidency. He will spend his waking hours thereafter not pursuing the legitimate goals of state, but punishing those who would dare to oppose him. ...

Did I mention this man hates me? You and me? Yes he does. Why? Because he can. Yes He Can. Beneath that cool persona is a megalomaniac. Cool? Like Stalin after a purge, emotionally and sexually spent. Like Saddam after a torture session, dozing in his chair with someone's genitals curled in his fist. Like Pol Pot after a petit mal seizure, mumbling a litany of the dead. Cool that way.

So I will cast my pathetic vote, and ramp up my relocation to the mountains. Reduce my footprint. Carbon? That will be a nice byproduct, but I mean my personal footprint. My credit footprint. My interface with authority footprint. I'm researching micro-hydro water turbines for that stream, windmills for water, a half-acre patch for vegetables, a few goats, and a bison. Just because I want a fucking bison.

Velociman's our region's greatest crank, which ought to be an official ceremonial position like poet laureate. One of my favorite posts of his gave a primer on how to speak Southern:

... we talked like many people in southern or rural areas talk. You make eye contact when you address each other, then you look down, at the ground, and spit in the grass, and rub it absent-mindedly with the toe of your shoe. As if to say, I enjoy your company, but not that much. I ain't gay, trucklehead! Talk, spit, rub. Had many a conversation doing that.

Syndicated 2008-10-27 13:46:08 from Workbench

Apache HTTP Server 2.2.10 Released

This afternoon I upgraded the servers that run the Drudge Retort and SportsFilter to Apache 2.2.10, a minor upgrade released on Oct. 15 that fixes a cross-site scripting (XSS) vulnerability in FTP URLs discovered by Marc Bevand of the network security company Rapid7.

The rest of the changes in the new version look like minor bug fixes.

I compile the Apache web server from source code on both servers, a process that was difficult the first time around but has been easy since then. After I download a new version, I upgrade with three commands:

  1. ./configure --prefix=/usr/local/apache2 --enable-rewrite --enable-so
  2. make
  3. make install

Syndicated 2008-10-27 01:10:15 from Workbench
