Inspiring Japanese Culture
My own small experience with Japanese culture has also been favourable. Their respect for their elders reaches a level that seems completely out of fashion in the west. I found there was a lot to learn from the simple example of just one Japanese student's life.
Your Don Knuth comment made me chuckle. It's true; it would be hard not to love him. Maybe if more Christians lived like that, we wouldn't be "taking the Lord's name in vain" and giving God a bad rep in the world.
Crackpots and Disasters
If God exists, who do you think He would be more angry at: a person who makes over a million dollars a year, who may even claim to be a Christian, and still has most of it in his bank account at the end of the year; or a poor Haitian who practices voodoo?
I think this is a difference that a lot of us "religious wackos" miss. If we are somehow blessed with plenty in the western world, that is not evidence that God is rewarding us for something good we've done. Rather, it is proof that we are doing a piss-poor job at the responsibility God has given us to take care of the poor. And if any of these Christians read the Bible, they should take note of the parts that indicate that rich people who don't share are actually storing up judgment for themselves in the very wealth they take comfort in. Their riches are their judgment. That cool thousand, million, or billion in the bank will be a liability at the last day.
Don't assume someone is a Christian just because they claim to be. And by that same token, don't assume that God is pleased with Christians just because they claim He is.
If I understand the Bible correctly, North America could easily be more hell-worthy than Japan or Haiti, in the same way that Jerusalem in Jesus's day was more hell-worthy than Sodom and Gomorrah.
This may be a clue as to how to clean up the recentlog. Perhaps an automatic email whenever your nick is mentioned in a blog post. Or a <call_you_out> tag, which requires a response within a month before you get pruned from the default recentlog threshold. :-)
Repairing the GoFlex Home NAS Drive
Each time the device powers off and restarts, it sets itself to use DHCP.
The device runs Linux under the hood, and even exposes the SSH port, but the accounts it uses are not obvious. And root is disabled.
To fix the device yourself, you will first need root access. The easiest way is described in this wiki page. Basically:
ssh USERNAME_hipserv2_seagateplug_XXXX-XXXX-XXXX-XXXX@DEVICE_IP_ADDRESS sudo -E -s
The USERNAME is the name of the account you created on the device through the usual Windows interface. The XXXX-XXXX-XXXX-XXXX is the Product Key (PK) printed on the bottom of the device.
First, make it easier to gain root access by enabling root SSH logins:
Edit /etc/ssh/sshd_config and uncomment: PermitRootLogin yes
Set your own root password: passwd root
Restart SSH: service sshd restart
Next, there is a bug in the logic of the ensure_firstbootnetwork() function in the /etc/init.d/oe-bootinit script. The entire function should probably run only if the /etc/firstboot file exists, but the if/else statement checks for that file only in the first branch. The else branch is where the DHCP overwrite happens.
By this point, though, your device has passed the first boot, so you don't need this code anymore. I commented out the entire block of code, and replaced it with something innocuous, just to avoid any shell issues with empty functions:
echo do nothing > /dev/null
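For reference, here is a minimal sketch of what the patched function might look like. The actual script on your device may differ in detail; treat this as illustrative, not a verbatim copy of oe-bootinit:

```shell
# Sketch of a patched ensure_firstbootnetwork(); the real script in
# /etc/init.d/oe-bootinit may differ.
ensure_firstbootnetwork() {
    if [ -f /etc/firstboot ]; then
        : # the original first-boot network setup would run here
    fi
    # The old else branch, which rewrote the network config to DHCP,
    # is removed; this no-op keeps the function body non-empty so the
    # shell does not complain.
    echo do nothing > /dev/null
}
```

The point of the no-op is only to avoid an empty function body, which some shells reject as a syntax error.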
You'll want to be careful editing these files: if you make a typo that prevents the network from coming up at boot, you won't be able to log in to fix it.
It seems that Seagate has known about this issue since September, but no software update fixes it. When I used the web interface to check for updates, it claimed the device was up to date. You may want to take this into account when making future purchasing decisions.
Good luck, and enjoy!
IPv6 and Mozilla
Under about:config, set network.dns.disableIPv6 to true.
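If you prefer a file to clicking through about:config, the same preference can be set in a user.js file in your profile directory (the pref name is the real one; the exact profile path varies by system):

```
// user.js in your Firefox/Mozilla profile directory
user_pref("network.dns.disableIPv6", true);
```

Firefox reads user.js at startup and applies any user_pref lines on top of your saved preferences.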
Rant: Why, Oh Why, Linux??
I've been experimenting with Debian Squeeze on an older T40 Thinkpad, and I've been noticing some disturbing trends. Trends I probably hadn't noticed before, since I hadn't had hardware this new, and had been fairly happy using xterms and make on Debian Lenny.
I'm noticing a divergence between the command line and the GUI... a divergence that is horrifying. I installed the base system, got WiFi going using /etc/network/interfaces, and was happy. Then I installed the GUI, and my carefully configured network stopped working. The GUI had its own way to configure things. Yes, the GUI worked, if it was allowed to operate by itself, but that's not the point. Why have two? Why break one to make the other work? I still don't know where the GUI stores its network settings. They sure don't show up in the /etc config files.
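For context, this is the kind of configuration I mean: the classic Debian stanza in /etc/network/interfaces (interface name, SSID, and passphrase here are placeholders):

```
# /etc/network/interfaces -- example WPA stanza; values are placeholders
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid ExampleNetwork
    wpa-psk examplepassphrase
```

With wpasupplicant installed, ifup reads the wpa-* options and brings the interface up without any GUI involved.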
Divergence #2: DNS is snappy from the command line, but somehow Firefox labours to find Google or Slashdot, even though it was just there a few minutes ago. Really? This is 2010. I poke around the network with tcpdump to see what on God's green earth Firefox is doing. It's looking up DNS entries for a tab I'm not even on.
Look, I know IPv6 is the next big thing, but don't turn it on unless it has zero impact on what people need right now. Let me easily choose which network has priority. And give me one place to set it. Not two. Not five. One.
So I say goodbye to Firefox and load Mozilla... the ancient browser... it has worked for me in the past. Maybe it will be too old to fail. I load my home page and watch the network with tcpdump. I see it going through the page, looking up DNS for all the links. Great, I think, at least it will be cached for me. I click on some links. A little better at first, but it soon goes downhill. Still more forgetful than an Alzheimer's patient.
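For what it's worth, those per-link lookups look like DNS prefetching, which Firefox exposes as a preference. Turning it off is another about:config or user.js tweak (the pref name is real; whether it fully cures the behaviour I saw is my assumption):

```
// user.js: disable speculative DNS lookups for links on the page
user_pref("network.dns.disablePrefetch", true);
```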
Why is it so hard to get networking working in 2010? My friend went through this with Ubuntu back in 2007, and it didn't work right. At least now it sort of hobbles along, but back then, NetworkManager only managed to dig itself a grave.
Enough about networks: my laptop is fairly old, and the battery is getting flaky. This is to be expected; I don't mind. The battery is still useful for an hour or two, and I'm happy. So I load up Gnome and work away on the battery. The icon shows me the battery discharge progress. Then it gets to the point where the battery is low. It pops up a gigantic black notification message, on top of other windows, telling me this. Fine. Thanks, I guess. Go away now. Then it gets critically low, and shows me another one. Ok, ok, I get it, calm down. Then it alternates back and forth between low and critical. I even saw two notification messages on the screen at once... hiding other windows that I needed! And there is no way to turn these off! Unbelievable. Do I have to hack the code to get rid of these monstrosities? Why not just warn me once and let me deal with the consequences? Why pester me and assume I have no brain? I know the battery is low. I know that means I have 30 minutes of power left. Stop telling me what I already know!
Does every poor end user need to download some Gnome C code, and hack around in the code to figure out how to tune the system? I've already had to do it once to figure out how to disable automatic Suspend mode, when the GUI helpfully left that option off the menu. Now again for notifications? Good grief on a stick!
Poor Linux. Everybody running around the system, coding their own little kingdoms in their own little sandboxes, and the distros are including it as if it was something stable for end users. The kernel guys do one thing, the udev guys do another, the hotplugging notification guys do something else, the GUI guys do yet more (in multiple different ways, of course), and the user tries to push a rope uphill, suffering with a very busy but confused system crippled by lack of configuration options in the menus.
Dear programmers: the first setting you should code in your next fancy new feature is how to turn the bloody thing off.
Video: Intro to Git
Andrew Berry filmed the talk, and we recorded the laptop screen for the slides and command line activity. He put it all together in a video which you can download at archive.org.
The lights were off so people could see the screen, so you won't see much of me, but the slides are there.
If you want to grab the updated slides and scripts for yourself, you can download them via git with the following command:
git clone http://foursquare.net/intro_to/.git
Thanks to Khalid Baheyeldin, Andrew Berry, and Bob Jonkman for their help and equipment, and thanks to the Drupal group for their welcome.
KWLUG: Introduction to Git
You can grab my OpenOffice slides, as well as the demo scripts I used during the presentation, by using git:
git clone http://foursquare.net/intro_to/.git
If you missed tonight's talk, I'm also booked to give it at the next Drupal User Group meeting on Thursday, July 15, 2010, at 7pm. You can find more details here. That meeting is at 58 Queen St. in Kitchener (across the street from the old KWLUG meeting place).
Special thanks to Paul Nijjar for providing the laptop tonight and the setup.
That is a pretty comprehensive list. I'm probably in the same camp as you claim to be, having broken every one.
The rules are pretty strong as well. I'm still mulling them over in my mind, but there is one that sticks out as unwise, or poorly written. It is #3, which says: "I will not write a program that fails to do tomorrow what it was able to do yesterday."
Taken purely literally, this prevents all change, all experimentation, all feature-level refactoring. I'm sure this is not what the rule was intended to say. I'm assuming it is more of an encouragement for better testing, so that a bugfix in one area doesn't break a feature in another.
Even taking this rule as a Testing Rule, I think it overreaches. There are steps that programmers can take to prevent regressions, but to assume that it is possible, or even common, to prevent them all is wishful thinking.
I think a better rule would be something like: "I will not refuse or hinder the fixing of a bug that I have caused."