Older blog entries for yeupou (starting at number 155)

Booting over the network to install the system

Do you still have CD/DVD players installed on your boxes? Well, I mostly don’t; why would I anyway?

Actually, apart from installing a system or reaching the rescue mode of the system installer, there’s nothing a CD/DVD drive is needed for, and nothing it isn’t best avoided for (few things are slower and noisier on today’s computers). But even that excuse is gone now: most mainboards include an ethernet card capable of network booting, even if it hides behind confusing names like NVIDIA Boot Agent for instance.

Usually, it supports the Preboot Execution Environment (PXE), which combines DHCP and TFTP. That’s nice because it’s then easy with GNU/Linux to run DHCP and TFTP servers. So here comes my PXE setup, using ISC DHCPD and TFTPD-HPA, both shipped by Debian.

As described in the README, on the server (you have a home server, right? *plonk*), put this PXE directory somewhere clever, like /srv/pxe for instance (yes, that’s what I did; but you can put it in /opt/my/too/long/path/i/cannot/remember if you really really want).

Run the gnulinux/update.sh script to get kernels and initrds. By default, it fetches Debian and Ubuntu stuff. If it went well, you should have several *-linux and *-initrd.gz files in gnulinux/, plus a generated config file named default inside pxelinux.cfg/.
You may add a symlink to this script inside /etc/cron.monthly so you keep stuff up-to-date.
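
For the record, the generated default file uses the plain SYSLINUX configuration syntax; a minimal hand-written equivalent (labels and file names below are only illustrative, update.sh generates the real ones) would be:

DEFAULT local
PROMPT 1
TIMEOUT 100

LABEL debian-installer
  KERNEL gnulinux/debian-amd64-linux
  APPEND initrd=gnulinux/debian-amd64-initrd.gz

LABEL local
  LOCALBOOT 0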

Then, you must install a Trivial File Transfer Protocol (TFTP) daemon on your local server, which will serve, in the context of PXE, these files you just got:

apt-get install tftpd-hpa
update-rc.d tftpd-hpa defaults

Edit /etc/default/tftpd-hpa, especially TFTP_DIRECTORY setting (you know, /opt/my/what/the/…).
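
On Debian, that file holds a handful of variables; a typical sketch, where only TFTP_DIRECTORY really needs adjusting to our setup:

# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/pxe"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="--secure"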

Finally, you must update your DHCP daemon so it advertises we’re running PXE (filename and next-server options). With ISC dhcpd, in /etc/dhcp/dhcpd.conf, for my subnet, I now have:

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;

  # PXE / boot on lan
  filename "pxelinux.0";
  next-server 192.168.1.1;
}

Obviously, you won’t forget to do:

invoke-rc.d isc-dhcp-server restart
invoke-rc.d tftpd-hpa start

That’s all. Now, on your client, go into the BIOS, look for “boot on LAN” or whatever crap it may be called (it varies greatly) and activate it. Then boot. It’ll do some DHCP magic to find the path to the PXE files, and the menu should show up on your screen at some point.

We can actually do plenty of things with this simple stuff. We could, for instance, use it to boot diskless terminals on a specifically designed distro.


Syndicated 2012-07-14 23:24:09 from # cd /scratch

Converting PDFs to multiple HTML pages with pdftk and pdftohtml

As already stated on this blog, Bada OS is total crap. Scripting is a mess, T9 is missing from original versions, updating is not an available option depending on your phone (even if the phone is less than a year old). It is just as worthless when it comes to reading PDFs. No matter what, even if you feed it a specifically cropped PDF with no margins, you’ll always end up with something not really readable: too big, too small, whatever. A pain in the ass.

I soon realized it’s best, with such an appalling combination of software and hardware, to convert ebooks/PDFs to HTML. And as the provided HTML reader can’t remember which page you last read (not surprising) and, ahem, is unable to load a 3 MB page (low memory, it says: even though a 30 MB PDF loads fine in the PDF reader on the exact same phone, go figure!), it needs split HTML.

PDF is usually an output format, not a source format. While there’s plenty to convert to PDF, the fact is there is no complete suite to convert from it. pdftk is powerful but not easy to handle IMHO, and pdftohtml’s latest release is almost 10 years old. So I ended up writing a small wrapper (pdf2htmls.pl) around both these tools to convert one PDF to multiple HTML files with basic indexes. It takes --input=file.pdf and (optional) --output=directory arguments. Aside from Perl, it requires the Debian packages pdftk and poppler-utils.
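
The underlying idea is simple: burst the PDF into single pages with pdftk, then convert each page with pdftohtml. Roughly, and leaving aside the index generation the wrapper adds (file names are illustrative):

# split the PDF into one-page PDFs (pg_0001.pdf, pg_0002.pdf, ...)
pdftk file.pdf burst output pg_%04d.pdf
# convert each page to a standalone HTML file
for page in pg_*.pdf; do
    pdftohtml -noframes "$page" "${page%.pdf}.html"
done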

The indexes are über-crude. They could be improved with chapters/titles; maybe I’ll add that later.


Syndicated 2012-06-21 12:57:03 from # cd /scratch

Having homemade aliases, functions and such available to every interactive shell

Years ago, I remember RedHat already provided /etc/bashrc.d/ for custom scripts to be sourced site-wide whenever bash was started. Debian still only provides /etc/profile.d for such scripts to be sourced site-wide. So, when I started using Debian, I added stuff in the latter directory and made sure that /etc/bash.bashrc itself sourced /etc/profile so it would be picked up in any case.

There is actually a problem with that.

As defined (RTFM! `man bash`), /etc/profile is to be sourced for interactive login shells (`bash --login`) while /etc/bashrc or /etc/bash.bashrc is to be sourced for interactive non-login shells (`bash`). Having /etc/profile run by /etc/bash.bashrc defeats the overall purpose of distinguishing the two. LFS asks for /etc/profile’s content to be a run-once thing, for logins, not something that should be executed for every xterm.

But if you don’t, anything in /etc/profile.d will be ignored by most shells you start during an X session, where you actually log in once and then start numerous xterms. OK, to hold your aliases and local functions, you can edit /etc/bash.bashrc and use skels for ~/.bashrc, but that’s way less convenient than just copying a script into a directory.

To get something consistent, I added an /etc/bashrc.d directory. I think such a directory should exist by default on Debian, even if I would agree with anyone pointing out that it should not be BASH-specific.

Here’s an example of my /etc/bashrc.d and my /etc/profile.d. My local Debian package’s postinst script automatically adds the required following line to /etc/bash.bashrc:

[ -z "$ETC_BASHRC_SOURCED" ] && for i in /etc/bashrc.d/*.sh ; do if [ -r "$i" ]; then . $i; fi; done

Note that the same postinst script symlinks /etc/profile.d/bash_completion.sh to /etc/bashrc.d/bash_completion.sh. The very existence of this file in /etc/profile.d shows, IMHO, the extent of the broken default design. How come someone would actually want bash completion for login shells but not for interactive non-login shells? I would actually expect the contrary: as bash completion can be CPU-time consuming, if it is to be skipped in only one case, it’s definitely on login shells! Why is it so? Probably because only /etc/profile.d exists.

(I’ve also read some people saying that /etc/bash.bashrc should be edited by hand. On every computer of a local network, just to add a few local aliases? Ouch!)


Syndicated 2012-05-29 09:45:35 from # cd /scratch

For a change, today I won’t describe how I did something but how I did not.

I had in mind to use tumblr with a daily automated post of a picture. I figured it would be nice if a daily cronjob on my local server updated a git directory and then posted the first image in the queue.

First, I found out that tumblr refuses to handle mail sent directly by mutt through the local server’s SMTP. So I then tried having mutt send the mail using gmail’s authenticated SMTP. It did not work either. But it works fine to any other recipient. And it works if sent directly from the gmail web interface. “We’ve made it incredibly easy to post from your desktop or mobile phone. Just send an email to the custom email address for the blog you’d like to publish to”, they claimed. Go figure, that’s probably what they implied by the confusing sentence “Send posts directly to your mobile posting email address. You cannot email another email address and then forward the email from there”. I understand spam is an issue they have to care about, but how come even gmail authentication isn’t proof enough of goodwill?

Anyway, I ended up with a non-working script for the simple task of sending a mail.


Syndicated 2012-05-07 07:58:01 from # cd /scratch

Upgrading Dell Latitude C640 CPU

Just because it’s quite cheap via ebay, I decided to upgrade my Dell Latitude C640’s CPU from a 2.0 GHz P4-M (sl6fk) to a 2.4 GHz one (sl6vc). It could, in theory, take a 2.6 GHz one, but those (like sl6wz) are way more expensive.

The Dell Latitude C640 service manual describes in lengthy detail how to actually change the CPU. There isn’t much point in repeating it here: remove the hard drive, the keyboard, then the CPU thermal cooling assembly, and you can easily access the CPU socket.

After that change, the BIOS complained about a “Processor Microcode Update Failure – The revision of processor in the system is not supported.”, a non-blocking item. A quick check with dmidecode showed me the current BIOS was version A08, released 03/04/2003, actually a few months before the first releases of the new processor. So I decided to upgrade the BIOS too, following this advice. I downloaded a windows BIOS update from Dell’s website. On a computer with wine available (not the case of my laptop), I ran wine ./R71684.exe and stopped it after it extracted all the files it contained, then I ran unshield x data1.cab to get the contents of this cabinet. I found a file BiosHeader/C640_A10.HDR that I copied to my laptop. On the laptop, with the package libsmbios-bin installed and the module dell_rbu loaded, I ran dellBiosUpdate -f ./C640_A10.HDR -u, which returned:

Supported RBU type for this system: (MONOLITHIC)
Using RBU v2 driver. Initializing Driver. 
Setting RBU type in v2 driver to: MONOLITHIC
Prep driver for data load.
Writing RBU data (4096bytes/dot): .................................................................................................................................
Notify driver data is finished.
Activate CMOS bit to notify BIOS that update is ready on next boot.
Update staged sucessfully. BIOS update will occur on next reboot.

Then I rebooted the laptop, and it restarted mentioning it was now running BIOS version A10. Cpufreq works fine; everything is in order.
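
For reference, the whole sequence boils down to the following (same file names as above; the update package obviously depends on your model):

# on any box with wine: extract the HDR from Dell's windows updater
wine ./R71684.exe          # stop it once the files are extracted
unshield x data1.cab       # yields BiosHeader/C640_A10.HDR

# on the laptop, as root
apt-get install libsmbios-bin
modprobe dell_rbu
dellBiosUpdate -f ./C640_A10.HDR -u
reboot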


Syndicated 2012-04-26 22:05:16 from # cd /scratch

Cleaning up ogg/mp3 collection (tags, filenames) with lltag

Over the years, my music collection got annoyingly inconsistent (file names, tags, etc.). I wrote two scripts to clean it up, into the form maindir/MusicGenre/Band/Album/songs. The first one identifies albums from files; the second one does the actual job, as an lltag wrapper. The point of doing it in two distinct scripts is to separate the part where user input is needed from the part that requires none but takes most of the CPU time.

Considering there’s an initial directory that contains a subdirectory for each music album that must be sorted out:

  • cleanup-music-directory-01-identify.pl writes an import file (containing style|band|year|album, only the year being optional) in each subdirectory, according to your input. You’ll notably have to select a music genre.
  • cleanup-music-directory-02-rename.pl reads the import files and then uses lltag to do the actual job – renaming and updating tags. Best is to run it in --debug mode first, which will only show the proposed changes without altering anything yet; if some of your files lack the TITLE tag, it can get messy.

These two scripts must be edited first (paths to the collection and the user supposed to retain ownership of the files). A typical session is sketched below.
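
Assuming the scripts are executable and in $PATH (only --debug is an actual option mentioned above, the rest is plain invocation):

# step 1, interactive: writes an import file per album subdirectory
cleanup-music-directory-01-identify.pl

# step 2, dry run: only print the renames/retags lltag would do
cleanup-music-directory-02-rename.pl --debug

# step 2 for real, CPU-intensive but unattended
cleanup-music-directory-02-rename.pl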


Syndicated 2012-04-18 09:32:45 from # cd /scratch

Using a laptop as alarm clock

My alarm clock died long ago. Since then, I use my cellphone to wake me up. It works OK, except that my current cellphone is total crap and, among numerous issues, its alarm software some mornings just stays idle while, on the days it actually works, a simple movement shuts it off. Believe me, I checked everything, made plenty of tests: it’s just bad design and poorly coded software.

Not to mention that I usually wake up with no alarm; so when I use one, it means that I must wake up early, probably without enough sleep. I need the real deal, high sound level and no shortcut to kill it, to actually get up.

Whenever I needed an alarm, I ended up running, on my laptop not too far from my bed, some `sleep XXh XXm && mplayer /path/to/a/song`, checking the sound volume, followed by CTRL-C in the morning.

Two days ago, I was über-tired, I needed to wake up early the next morning, and computing “tomorrow’s waking-up time minus current time” just pissed me off, not to mention checking the volume level, mute setting and such. It pissed me off enough to decide to write a script to fix the problem. Here comes wakey.pl:

  • it takes as argument the time you’d like to wake up, in the form HH:MM or HHh MMm;
  • it can run as a timer (like sleep), useful if you want to take a 20min nap, with -t or --timer;
  • it wakes you up playing a random song picked from ~/.wakey;
  • it uses mplayer to play the song, so it can be in any format your mplayer supports;
  • it raises the sound volume progressively when trying to wake you up (you can set --volume-max, in case 100% on the Master mixer is too loud) and properly resets the mixer settings when finished;
  • it won’t stop playing the music until you type a 3 to 5 character word randomly taken from the default dictionary installed on your system (/usr/share/dict/words).

I wanted it to deal with any powersave setup, to make sure the laptop is forbidden to sleep or hibernate, but I found no portable and clean way to do it (my laptop uses KDE with PowerDevil). I’d be happy to hear about any clue/lead in that regard.

# (this assumes wakey.pl is executable and in $PATH)
# wakes you up next time it's 6 in the morning:
wakey.pl 06:00

# the same
wakey.pl 6h

# wakes you up in exactly 15 minutes
wakey.pl -t :15

# the same
wakey.pl 15m --timer

# the same, but makes sure the sound volume won't exceed 70%
wakey.pl 15m -t -v 70

To run it, make sure you have the Debian packages libfile-homedir-perl and libterm-readkey-perl installed. You’ll also need mplayer and amixer properly set up.
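
On Debian, that boils down to (amixer being shipped in alsa-utils):

apt-get install libfile-homedir-perl libterm-readkey-perl mplayer alsa-utils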


Syndicated 2012-02-22 10:32:21 from # cd /scratch

Moving a live system from one hard disk to another

Ever found yourself in the situation where you want to move your GNU/Linux system from an old hard disk to a new one? Well, it can be done quite easily :-)

First, set up the new partitions with parted, then mkswap and mkfs (using proper labels). Yes, I assume you’re familiar with these (RTFM).
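
For instance, a minimal root + swap layout on a new disk /dev/sdc (device name, sizes and labels here are illustrative):

parted /dev/sdc mklabel msdos
parted /dev/sdc mkpart primary ext4 1MiB 95%
parted /dev/sdc mkpart primary linux-swap 95% 100%
mkfs.ext4 -L newroot /dev/sdc1
mkswap -L newswap /dev/sdc2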

Mount the new root partition somewhere, like /mnt/tmp in this article.

Create in this new partition all the directories that it would not make sense to copy from the original system (in my case: home being on another partition, stockage containing only NFS mounts):

cd /mnt/tmp
mkdir dev  home  proc  stockage  sys  tmp mnt

Shut down any daemon/service that is up (cron, etc), to avoid copying stuff in an incoherent state.

Then, actually copy the system:

for dir in /*; do if [ ! -e "/mnt/tmp$dir" ]; then cp -ax "$dir" /mnt/tmp/; fi; done

Edit /mnt/tmp/etc/fstab to use the newly created partitions.
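
With the labels created above, label-based entries are the least error-prone; a hypothetical excerpt:

# /mnt/tmp/etc/fstab (excerpt)
LABEL=newroot  /     ext4  errors=remount-ro  0  1
LABEL=newswap  none  swap  sw                 0  0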

Chroot into the new system to make it bootable with grub:

mount --bind /dev /mnt/tmp/dev
mount --bind /sys /mnt/tmp/sys
mount proc -t proc /mnt/tmp/proc
chroot /mnt/tmp
grub-mkdevicemap
update-grub
# (you can run blkid to check the root's unique id of this
# new system shows up in the new system /boot/grub/grub.cfg)
grub-install /dev/XX  # where XX is the new disk, like /dev/sdc or whatever

Reboot on the new system (stating the obvious: change the boot drive order in the BIOS). If everything is fine, then copy /home from the old disk to the new partition, without logging in with any user (CTRL-ALT-F2 to quit the X server and log in as root, for example).

After removing the old device, re-run update-grub so it no longer shows up. The end.


Syndicated 2012-02-20 15:35:24 from # cd /scratch

Getting accurate temperature reading for the CPU

On my main workstation, lm-sensors provides apparently contradictory temperature readings for the CPU, depending on the sensor:

radeon-pci-0200
Adapter: PCI adapter
GPU Temperature:  +62.0°C  

k10temp-pci-00c3
Adapter: PCI adapter
CPU Temperature:  +17.0°C  (high = +70.0°C)
                           (crit = +70.0°C, hyst = +68.0°C)

atk0110-acpi-0
Adapter: ACPI interface
[...]
CPU FAN Speed:          1890 RPM  (min =    0 RPM)
[...]
CPU Temperature:         +32.0°C  (high = +90.0°C, crit = +125.0°C)
MB Temperature:          +42.0°C  (high = +45.0°C, crit = +90.0°C)

17°C, as reported by the CPU sensor, seems very low, especially as the temperature of the room the computer sits in is at least 17°C already. Clearly, the motherboard sensor (atk0110 / IT8716F chip) readings, the same as what the BIOS reports, are more sensible.

There’s actually a lot of misinformation on the web. For instance, the author of CoreTemp, a proprietary piece of software providing CPU temperature readings on MS Windows, states on his front page that “all major processor manufacturers have implemented a DTS (Digital Thermal Sensor) in their products. The DTS provides more accurate and higher resolution temperature readings than conventional onboard thermal sensors”. Possibly, probably, right: k10temp may be more accurate than atk0110. But when the same author replies, on his forum, to a user asking about inconsistencies in CPU temperature readings, clearly interested in real and not relative temperature (he wrote: “I’m running water cooling and the temps aren’t high during load but just wondering about the accuracy”; a high temperature is meaningless on an undefined relative scale), that “I’d say that Core Temp is more accurate, especially at higher temperatures. The ASUS programs sensors are based on the motherboard and depend on an external chip. The sensors Core Temp reads are located in the CPU itself and the values are read directly from the CPU registers.”, he clearly shows misunderstanding of what the superior accuracy of CPU sensors really means.

As documented by AMD and mentioned in the k10temp Linux module doc, “[k10temp] is the processor temperature control value, used by the platform to control cooling systems, [...] is a non-physical temperature on an arbitrary scale measured in degrees, [...] does not represent an actual physical temperature like die or case temperature. Instead, it specifies the processor temperature relative to the point at which the system must supply the maximum cooling for the processor’s specified maximum case temperature and maximum thermal power dissipation”.

I was about to publish this article without paying attention to Intel sensors, but a quick search led me to even worse: a comment about Core Temp in a doc titled CPU Monitoring with DTS/PECI stating: “These tools provide a convenient way to see the temperature variation reported by the sensor [...] There are several issues with these tools. First the assumed value for Tj may not be correct and thus impact the accuracy of actual temperature reporting. Secondly the DTS is only accurate when in the adjacency of Tj. Not knowing the intention and effective range of DTS, the tools try to compensate with the inaccuracy of low temperature reading, which may not be a correct interpretation.”

However accurate they may be, relative readings like those k10temp provides for AMD K10 are almost meaningless to an end user (while great for the system, for fancontrol and such), who likely expects to be able to compare them to other (motherboard|hard disk|etc.) readings. In my case, surely the k10temp value means something (17°C is low), but it makes no sense to compare it to the room (20°C), GPU (62°C), PATA hard disk (39°C), motherboard (42°C) or any other temperature. In short, unless you know exactly what you’re doing, use the motherboard sensors; and if you’re looking for an alternative to CoreTemp, try Open Hardware Monitor.


Syndicated 2012-02-18 15:28:31 from # cd /scratch

RSS feeds: new layout for rawdog

Almost two years ago, I posted an article describing how I use rawdog, a minimalist RSS aggregator, to get, on my webserver, an HTML output of my Akregator aggregated feeds. Since then, I changed the layout:

  • articles are no longer shown in four columns,
  • article descriptions are provided directly on the page and no longer on mouseover of the title,
  • there are now several index pages, one per day (as many as necessary to reach the article limit, set to 950), using the plugin dated-output.

I won’t re-describe the whole setup; the relevant files to set up this new rawdog layout are here. On my webserver, it goes in /home/rawdog, using the user rawdog (group www-data). Obviously, crontab is actually /etc/cron.d/rawdog and should be edited to refer to the proper local users.
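
Hypothetically, such an /etc/cron.d/rawdog entry could look like this (rawdog’s -u fetches the feeds, -w writes the output, -d points at its state directory):

# m h  dom mon dow  user    command
*/30 *  *   *   *   rawdog  rawdog -d /home/rawdog -u -w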

It won’t harm, by the way, even if unnecessary by default (it could prove useful if, by any chance, your server is configured to interpret Perl .pl or Python .py files), to restrict access to the rawdog subdirectories that contain scripts, for instance by adding, for nginx, such statements in the server config:

    location /rss/scripts { deny  all; }
    location /rss/plugins { deny  all; }

Syndicated 2012-02-03 08:27:58 from # cd /scratch
