Older blog entries for prozac (starting at number 34)

A "Username/Password" Cookie?

Most Websites these days have "forums" or allow user "comments", and most of them do not allow posting without "logging in" or becoming a "member".

I dislike the thought of constantly having to create yet another online account every time I find a site interesting enough to want to participate in. It is a complicated process: I go to a site, feel like posting a comment, get a "you must register" message, adjust my Browser settings to allow cookies for the site, sometimes have to allow the site to use Javascript, go through a form or two providing my name and e-mail address (sometimes more), submit the form, and wait for an e-mail confirmation. Once the e-mail arrives, I can finally log in and "participate" in the "community" by posting my silly little comment.

But all this can be automated.

And all proposals that I have seen seem like HUGE CONGLOMERATIONS of PARADIGMS of OBJECT ORIENTATION and other high-falutin computer science precepts.

Well, what if there were something like the way COOKIEs are stored and transferred that allowed for the transfer of some kind of USERNAME and PASSWORD that sites can read from our computers?

I see an ASCII file format like an LSM:

Username: Jones
E-Mail: jones@adventure.org
Password: xn00Hg&6lklj(08jhss896

Where the password is a one-way hash: when the Browser initially negotiates with a Website, that hash value is what is transferred, and the correctness of the password is then checked via standard HTTP Authentication.

I mean, it's got to be as simple as this. Doesn't it?

PHPPyu

No name change. The secondary names I thought of are already in use. (It is very important to check for names already in use, or even similar ones.)

Redhat

I just installed Redhat v8.0. Nice. But what's with the use of "Wizards"???

I think Redhat/Linux developers should use the name "Guru", as in "Internet Connection Guru". Much better and not so blatantly Microsoft.

I just figured it was common practice to add new structure members at the end of a structure, so as not to break existing code that initializes it.

Alas, the Linux 2.4 file_operations device driver structure has a new member, struct module *owner, inserted at the beginning of the struct! Ugh!

I wasted several hours debugging/porting a 2.2 driver that had all of the function pointer members off by one!

(Partly due to a stupidism on my part: I ignored some compiler warnings about it--shame on me. Sometimes my brain just does not work like other people's.)

[I know about the abbreviated GCC way of initializing structs, but this was existing code written by someone else.]

I think it would be cool if more news and magazine Websites (those which have new content each day in the form of articles) had syndication like Advogato: http://www.advogato.org/rss/articles.xml

Imagine if one could go to, say, CNN.com, and get its site in RSS/XML--just the content and no HTML, images, etc. With an RSS/XML client I could download content much faster and view it in my own way, in my own style. I could download and archive it for later viewing. The Net would be less congested (perhaps).

Of course, there would have to be some sort of money-making scheme to get it all to work...

An Adaptive Sorting Technique for the Web

Web-sites such as news.google.com can provide a more personalized presentation to visitors by applying a simple sorting technique.

Long lists of information--principally of Web-links--can easily be sorted by probable interest so that links a user may be most interested in gravitate toward the top of the page.

Google's news page is a good example to explain how to apply this technique. (http://news.google.com/)

Google's news page lists news article links from many disparate yet related sources--from traditional news sources such as Time and CNN to online news such as Salon.com and Slashdot.org. The list is sorted in (what appears to be) an arbitrary manner.

There are two levels to Google's sorting approach: links are categorized first--Political, Entertainment, Science, Sports, etc.--and each category is sorted by publication time, latest first. What Google has done (intentionally or not) is apply two weighting factors to each link: Category and Time. By further weighting with keywords--something Google already has the capability to do--Google could simply maintain a selection history for each unique user and use it to sort by this combined weighting factor.

For example, if one were to look at my viewing history, one would find that I rarely view articles categorized as Sports and Entertainment, and mostly view articles categorized as Science and Politics. If there were a list of keywords attached to each article link, Google would have a measure of what kind of articles I mostly view.

Google can then sort the article list to my probable liking--the articles most likely to interest me at the top.

"The artist learns what to leave out."
   -- Ray Bradbury

Sometimes it pays to re-write. I finally do not feel apprehensive about releasing more code -- I have re-written PHPPyu, taking many things out. I feel good about its quality and its usefulness.

(PHPPyu is a mini Web Portal written in PHP with Blog/BBS like features.)

1 Nov 2002 (updated 1 Nov 2002 at 15:15 UTC) »

Added BBS features to PHPPyu: a couple of forums, mail to users, a finger feature, etc. Kind of like the Waffle BBS (if you have ever seen it). New and different -- different from any other Web BBS I have seen.

People actually posted to the story. Cool.

Still no passwords for user accounts. I wonder if it will work.

22 Oct 2002 (updated 22 Oct 2002 at 13:39 UTC) »

Yet another PHPPyu release. It is almost at the Alpha stage! (PHPPyu is a PHP Web-Portico.)

I have created -- don't know if anything like it exists -- what I am calling Inter-net-active Fiction, a mystery story that anyone on the Internet can contribute to.

(Oh yeah, http://www.vsy.cape.com/~jennings/index.php3?arg=story/)

21 Oct 2002 (updated 21 Oct 2002 at 17:31 UTC) »
PHPPyu updated. Finally I got cookies working. Still in pre-release mode (i.e. still finding small bugs and areas to be improved).

Do you think a web-portal site, not too unlike Advogato, would work when there are no passwords?

