Older blog entries for prla (starting at number 134)

Being on the verge of getting employed by a new tech company seemed like the right time to resume writing here.

I'm still working out details with my future employer, and it will probably be official today, if all goes to plan. I've already been assigned a task to test my capabilities as a developer, which I've been working on for the past week: a jQuery-based date picker to narrow down the presentation of some values over time.

To help me get acquainted with the codebase and get my bearings, the company's lead developer has been of invaluable help these past few days. It's always a great experience when you're able to learn from someone you respect and who proves to be very knowledgeable about the problems at hand.

What I'm NOT knowledgeable about, and what has honestly been a bit discomforting, is Git; I'm a complete fool with it. Reading The Git Reference is something I'm about to do.

More news about this step of my life as I get them...

Oh look, after so long, Advogato is still here. Maybe I'll start jotting down some notes again, why not?

5 Jul 2008 (updated 5 Jul 2008 at 00:07 UTC) »

It's been an interesting evening. Back at my parents' home tonight (I head back to Evora tomorrow), I've been trying to get the information system framework to run on the new Mac Mini that currently lives in my bedroom. So this entry goes some way towards documenting this evening's trip.

Leopard ships with a fully functional Apache 2.2 and getting PHP5 to play along with it is a simple matter of uncommenting one line in httpd.conf. Installing PostgreSQL is a breeze using Marc Liyanage's PostgreSQL package, not forgetting to set the cluster creation encoding to Latin1; everything in the information system is Latin1, so this saves a lot of headaches.
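
For reference, the httpd.conf change is just uncommenting the PHP module line that ships (commented out) in Leopard's /etc/apache2/httpd.conf, and if you end up creating the PostgreSQL cluster by hand rather than through Marc's installer, the encoding can be forced at initdb time. Roughly (the data directory below is just the usual default, adjust to wherever your install keeps it):

LoadModule php5_module libexec/apache2/libphp5.so

$ sudo apachectl restart
$ sudo -u postgres initdb -E LATIN1 -D /usr/local/pgsql/data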

The trouble began when I noticed that Leopard doesn't actually ship with the PostgreSQL bindings in its PHP5 installation. So basically there was no choice other than recompiling PHP from scratch. I first tried Marc's PHP5 package, which includes PostgreSQL support, but alas, everything went fine until the installation process bombed out at the end with a cryptic error.

So, off to compiling PHP from source, which had me searching for the Leopard DVDs so I could install the Xcode tools, namely gcc. Once that was done, compiling PHP was a breeze. The problem was that once it got installed, Apache complained that the PHP module had the wrong architecture. One minute of Googling told me that Leopard's Apache comes pre-built for all four architectures, so I'd need to do the same for whatever I build that interfaces with it. A prospect that clearly sucked.
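
For the record, the build itself was nothing exotic; a rough sketch of the configure line I used, assuming Marc's default PostgreSQL prefix (adjust to wherever yours lives):

$ ./configure --with-apxs2=/usr/sbin/apxs \
              --with-pgsql=/usr/local/pgsql \
              --with-config-file-path=/etc
$ make
$ sudo make install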

Miraculously, someone came up with a much better and hassle-free alternative: stripping the httpd binary of the surplus architectures and leaving only the 32-bit Intel one. Here's the magic sauce:


$ cd /usr/sbin
$ sudo cp httpd httpd-fat
$ sudo lipo httpd -thin i386 -output httpd

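Before declaring victory, it's worth double-checking the result and reloading Apache; lipo -info simply lists the architectures left in a binary:

$ lipo -info httpd
$ sudo apachectl restart
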
Works like a charm.

Et voilà. It's up and running!

Now I'm spent; I'd better crawl to bed.

3 Jul 2008 (updated 3 Jul 2008 at 22:07 UTC) »

Adapting to development under CakePHP and the university's information system architecture has been slow but steady, and really picked up today. Now I see how scarce whatever I developed in the past under MVC frameworks really was. It obviously helped me understand the foundations of what models, views and controllers are, but I guess I still hadn't grasped what they really mean in practice. That, alas (or not), only comes with extensive exposure to somewhat complex systems that use them.

In any case, it's been a really interesting trip so far, and the best side effect has been learning a lot of simple but neat Emacs tricks with Gonçalo, my supervisor on this particular project. Another important thing is that I'll probably be developing another information system, on an entirely different subject matter, and the knowledge I've been acquiring will surely prove invaluable later on. Today was something of a breakthrough, as I implemented from scratch a lot of functionality which, despite being simple at its core, was nothing but a major headache less than a week ago.

And when it comes to database design, I may not come up with the best relational designs in the world, but I certainly understand them much more clearly now. Proof of that is how different (and, may I add, worse) the schema for a side project of mine looked before I got to work on this stuff, compared to now that I've learned a couple of things.

Oh, and I've been carrying the Macbook along to work again. I simply cannot live without this baby, and I guess the shitty keyboard on the desktop also keeps me from really feeling comfortable. Beyond that, I just miss the comfort I find in Mac OS, regardless of my everlasting love for Linux, which I've used for over a decade now.

It's also been two months since I started working, and the truth is I've done little else on the side. The Football Manager 2008 Portuguese translation has kicked off, and there's a web app I'd like to take a stab at, but both are on the back burner until I get back on my feet, so to speak. The translation, however, I need to start as soon as possible.

More to come. Interesting, albeit difficult, tiresome and sometimes nerve-wracking, times.

Database engineering has always been a hassle for me, and now I have to deal with quite a bit of it. Turns out I kind of like it and have been learning a lot. It helps to work directly under someone who's proficient and oozes experience. In the process I've also been picking up a lot of Emacs tricks, which are a huge help for productivity. This, in fact, is a direct result of leaving my Macbook at home now and using the desktop that's been assigned to me at work:

model name : Intel(R) Core(TM)2 Duo CPU E4500 @ 2.20GHz

So, what I'm developing at work is an information system to manage the performance evaluation of public administration workers and their superiors; in Portugal this is called SIADAP. I need to deliver the first part of the system, up and running, by the end of next week, and not yet being entirely productive with CakePHP is a bit of a problem.

In the meantime, my back is killing me again. I always predicted I'd have back problems but not when I'm bloody 24 going on 25. I'm hoping I won't need to pay a visit to the osteopath this time around, but it all depends on how I feel later today.

Forgot the damn Pattern Recognition book back in my parents' home. Meaning I'll have to start reading something else for the next couple of weeks. Strongest candidates are "Quantico" by Greg Bear and "Life of Pi" by Yann Martel. I think Quantico will win by a nose, for now.

Hitting a wall at work: I can't seem to get the information system codebase checkout to run properly under Apache on my development machine. Something's up either with the Apache config or with the CakePHP config itself. Either way, it's worrying me, because I need to get up to speed as soon as possible and here I am wasting time, unable to even get things running, let alone write some code.

More on this later...

Hard to believe it's been this long.

Doing the lazy, lazy thing for the whole weekend, not giving a damn about any work. Finally finished "Neuromancer", which was both interesting and confusing in places. I guess reading about technology from 1985 with over 20 years of real-world hindsight on that same technology explains my confusion. SF authors are right a lot of the time, but not always. Nevertheless, I feel better having finally read it, and it amazes me how much "The Matrix" actually resembles it. Now I'm reading "Pattern Recognition" by the very same William Gibson and, about 100 pages in, enjoying it quite a lot more.

Work has picked up, and writing information systems for important things in CakePHP is a mystery that slowly unfolds. I'd better get proficient at writing web apps with this framework, and that right soon.

Yesterday I made a detour in Lisbon to get D. to the bus station so she could get home for the weekend, and decided to go to Colombo's FNAC while I was at it to buy Porcupine Tree tickets for the October gig in Almada. While doing so, I couldn't resist the fresh money in my wallet, so to speak, and got myself a couple of treats: Portishead's "Third" and Black Mountain's "In The Future". Both are sublime and will surely feature in my Top 10 come the end of the year, unless the second half of the year is absolutely crazy in terms of sheer quality.

But, alas or not, the weekend is coming to an end and I need to pack, shower, dine and get my ass moving back to Evora. Work resumes tomorrow at 9am, and I never thought I'd be happy to have a 9-to-5 job, but I am. I needed the stability for a while.

10 Nov 2006 (updated 10 Nov 2006 at 08:53 UTC) »
Adventures in LDAP land

Until recently, I honestly had no idea what LDAP was all about. My work has now led me to research it a bit and implement a small-sized solution for the research centre. I still have no idea what LDAP is all about, but here are some scribblings I've gathered on the matter while we're at it. Getting LDAP to work on Linux with the OpenLDAP tools is largely a matter of figuring out the right schemas, filling the database, and pointing things at it.

But why LDAP? When administering a network of more than trivial size, it soon becomes a pain to create and maintain user accounts. An LDAP server can be used to provide a central point of control for Unix and Samba accounts, as well as email and web server authentication.

There's always more to it than meets the eye, but in this particular instance what we want is a set of workstation machines in a private subnet behind a router (which incidentally acts as the LDAP server as well) with central authentication. Basically, all user login information is stored on the server, leaving only local root (and service) accounts on each machine for administration purposes. Moreover, we want each user's home directory to be remotely mounted from an external file server (the HP MSA1000 storage array I've been blabbering about) via NFS. That last part will be covered in a forthcoming post.

Onwards to the configuration... setting up LDAP involves configuring both the server and however many clients we want using LDAP authentication. In this case we're working off a Debian system; configuration filenames can and will vary across distributions. (The following is, again, in a personal-notes style; if you come across this and need any further explanation, feel free to email me and I'll try my best to help.)

SERVER SIDE

# apt-get install slapd ldap-utils

Configuration of these, depending on your setup and environment, should be something along these lines:

Omit OpenLDAP server configuration? no
DNS domain name: ldap.example.org
Name of your organization: example_organization
Admin password: <administrative LDAP password>
Database backend to use: BDB
Do you want your database to be removed when slapd is purged? no
Allow LDAPv2 protocol? no
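
With slapd installed and configured, a quick sanity check is an anonymous search against the suffix that the answers above produce (dc=ldap,dc=example,dc=org); if the server is alive and the base entry was created, this should return it rather than an error:

$ ldapsearch -x -H ldap://127.0.0.1 -b dc=ldap,dc=example,dc=org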

Now is probably a good time to set up some basic organizational/user/group information. This can be done either from scratch, perhaps using some app to manage LDAP, or using a basic set of LDIF (LDAP Data Interchange Format) files. See http://www.moduli.net/pages/sarge-ldap-auth-howto under "Set Up Base Information and Test User and Group" for more on this; a minimal LDIF sketch follows the access-control snippet below.

One nitpick, also covered in the aforementioned guide, is allowing users to change their own details, including password, as is usually possible when the accounts are stored locally. This can be achieved by editing /etc/ldap/slapd.conf and adding:

access to attrs=loginShell,shadowLastChange,gecos
by dn="cn=admin,dc=ldap,dc=example,dc=org" write
by self write
by * read
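
As for the base information itself, here's a minimal sketch of what such an LDIF file might look like; the names and numeric IDs are made up for illustration, and the moduli.net guide linked above has a fuller version:

dn: ou=people,dc=ldap,dc=example,dc=org
objectClass: organizationalUnit
ou: people

dn: ou=groups,dc=ldap,dc=example,dc=org
objectClass: organizationalUnit
ou: groups

dn: cn=testgroup,ou=groups,dc=ldap,dc=example,dc=org
objectClass: posixGroup
cn: testgroup
gidNumber: 10000

dn: uid=testuser,ou=people,dc=ldap,dc=example,dc=org
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: testuser
cn: Test User
sn: User
uidNumber: 10000
gidNumber: 10000
homeDirectory: /home/testuser
loginShell: /bin/bash

Loading it is then just a matter of something like ldapadd -x -D cn=admin,dc=ldap,dc=example,dc=org -W -f base.ldif.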

CLIENT SIDE

# apt-get install ldap-utils libpam-ldap libnss-ldap nscd

LDAP Server host: 1.2.3.4
The distinguished name of the search base: dc=ldap,dc=example,dc=org
LDAP version to use: 3
Database requires login? no
Make configuration readable/writeable by owner only? yes

The distinguished name of the search base: dc=ldap,dc=example,dc=org
Make local root Database admin: yes
Database requires logging in: no
Root login account: cn=admin,dc=ldap,dc=example,dc=org
Root login password: <enter LDAP admin password here>
Local crypt to use when changing passwords: md5

In /etc/nsswitch.conf:

passwd: ldap files
group: ldap files
shadow: ldap files

In /etc/ldap/ldap.conf:

BASE dc=ldap,dc=example,dc=org
URI ldap://1.2.3.4 # your ldap server IP here

Followed by /etc/init.d/nscd restart.

PAM

# apt-get install libpam-passwdqc

Debian has a series of files in /etc/pam.d whose names begin with common-, which are included by the other files in that directory for specific services. We can tell PAM to use LDAP for all of these services by modifying these common files. In /etc/pam.d/common-password, comment out and replace:

password required pam_unix.so nullok obscure min=4 max=8 md5

or:

password required pam_cracklib.so retry=3 minlen=6 difok=3
password required pam_unix.so use_authtok nullok md5

with:

# try password files first, then ldap. enforce use of very strong passwords.
password required pam_passwdqc.so min=disabled,16,12,8,6 max=256
password sufficient pam_unix.so use_authtok md5
password sufficient pam_ldap.so use_first_pass use_authtok md5
password required pam_deny.so

Read the pam_passwdqc man page for more about the parameters you can give it.

In /etc/pam.d/common-auth, comment out:

auth required pam_unix.so nullok_secure

replace with:

# try password file first, then ldap
auth sufficient pam_unix.so
auth sufficient pam_ldap.so use_first_pass
auth required pam_deny.so

In /etc/pam.d/common-account, comment out:

account required pam_unix.so

replace with:

# try password file first, then ldap
account sufficient pam_unix.so
account sufficient pam_ldap.so
account required pam_deny.so

And don't forget to edit /etc/libnss-ldap.conf (which, by the way, on other systems is called /etc/ldap.conf)! That would have saved me an entire afternoon...
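
For completeness, the handful of lines in /etc/libnss-ldap.conf that matter here simply mirror the debconf answers from earlier; something along these lines, with the server address and base used throughout this post:

host 1.2.3.4
base dc=ldap,dc=example,dc=org
ldap_version 3
rootbinddn cn=admin,dc=ldap,dc=example,dc=org

With that in place and nscd restarted, getent is a quick way to check that NSS is really pulling accounts from the directory; the testuser entry from the LDIF sketch earlier should show up in:

# getent passwd testuser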

16 Oct 2006 (updated 16 Oct 2006 at 15:28 UTC) »
HP MSA1000 Storage Under Linux

These are notes on some experiments setting up hardware RAID on the MSA1000 and accessing the storage space under Linux. This MSA1000 holds five 146.8GB hard drives. We'll attempt to configure a LUN with a RAID5 disk set comprising four drives plus a spare. Detailed information on RAID level 5 can be found at http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks#RAID_5

At first, no units are configured on the MSA1000. Accessing the CLI as outlined in a previous post, we can take a look at our disk set:

CLI> show disks
Disk List: (box,bay) (bus,ID)     Size     Units
 Disk101     (1,01)    (0,00)    146.8GB    none
 Disk102     (1,02)    (0,01)    146.8GB    none
 Disk103     (1,03)    (0,02)    146.8GB    none
 Disk104     (1,04)    (0,03)    146.8GB    none
 Disk105     (1,05)    (0,04)    146.8GB    none

Using the add unit command, we create the aforementioned unit using all four disks plus a spare:

CLI> ADD UNIT 0 DATA="Disk101-Disk104" SPARE="Disk105" RAID_LEVEL=5

Now we have a unit:

CLI> show units

Unit 0:
In PDLA mode, Unit 0 is Lun 1; In VSA mode, Unit 0 is Lun 0.
Unit Identifier   : 
Device Identifier : 600805F3-001828E0-00000000-68460002
Cache Status      : Enabled
Max Boot Partition: Enabled
Volume Status     : VOLUME OK
Parity Init Status: 10% complete
4 Data Disk(s) used by lun 0:
   Disk101: Box 1, Bay 01, (SCSI bus 0, SCSI id  0)
   Disk102: Box 1, Bay 02, (SCSI bus 0, SCSI id  1)
   Disk103: Box 1, Bay 03, (SCSI bus 0, SCSI id  2)
   Disk104: Box 1, Bay 04, (SCSI bus 0, SCSI id  3)
Spare Disk(s) used by lun 0:
   Disk105: Box 1, Bay 05, (SCSI bus 0, SCSI id  4)
Logical Volume Raid Level: DISTRIBUTED PARITY FAULT TOLERANCE (Raid 5)
                           stripe_size=16kB
Logical Volume Capacity : 420,035MB

When initially powered on, the MSA1000 will detect host connections and assign them the default profile of DEFAULT. This profile must be changed to Linux using the ADD CONNECTION command:

CLI> ADD CONNECTION RX1600-1 WWPN=210000E0-8B004E53 PROFILE=LINUX

If all works out well, upon reboot the Linux hosts connected to the MSA1000 will see the disk array as a single /dev/sda device, just like a regular SCSI disk. This device can then be partitioned or otherwise mangled at will. In our case, we'll be deploying a Linux LVM solution on top of it, probably using different filesystems for different logical volumes.
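
A quick way to confirm that a host really does see the new unit after reboot is to look at the kernel's SCSI device list and at the (still empty) partition table; assuming it does come up as /dev/sda, something along these lines:

# cat /proc/scsi/scsi
# fdisk -l /dev/sda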

16 Oct 2006 (updated 8 Nov 2006 at 14:52 UTC) »
Exploring Linux LVM: Part 1

Part of the challenge I've outlined in the previous post is figuring out how to share the MSA1000 disk array between the two servers. Once that's figured out (part of it was solved by activating the fibre channel driver in the kernel), the idea is to use the Linux LVM (Logical Volume Manager) to manage the actual available storage space on top of the MSA1000 hardware RAID. Personal notes and scribblings on the matter follow.

The Linux Logical Volume Manager

Logical volume management provides benefits in the areas of disk management and scalability. It is not intended to provide fault-tolerance or extraordinary performance. For this reason, it is often run in conjunction with RAID, which can provide both of these. Logical volume management provides a higher-level view of the disk storage on a computer system than the traditional view of disks and partitions. This gives the system administrator much more flexibility in allocating storage to applications and users.

User groups can be allocated to volume groups and logical volumes, and these can be grown as required. It is possible for the system administrator to "hold back" disk storage until it is required. It can then be added to the volume (user) group that has the most pressing need. When new drives are added to the system, it is no longer necessary to move users' files around to make the best use of the new storage; simply add the new disk to an existing volume group or groups and extend the logical volumes as necessary.

In this particular situation the idea is to use the MSA1000 hardware RAID for fault-tolerance and reliability, and to do Linux LVM on top of it to create flexible volumes.

(Figure: a sample LVM topology)

Some usual LVM tasks for managing disk space follow.

Initializing a disk or disk partition:

# pvcreate /dev/hda 			(for a disk)
# pvcreate /dev/hda1			(for a partition)

Creating a volume group:

# vgcreate my_volume_group /dev/hda1 /dev/hdb1

This would create a volume group comprising both the hda1 and hdb1 partitions.

Activating a volume group:

# vgchange -a y my_volume_group

This is needed after rebooting the system or after running vgchange -a n.

Removing a volume group:

# vgchange -a n my_volume_group		(deactivate)
# vgremove my_volume_group			(remove)

Adding physical volumes to a volume group:

# vgextend my_volume_group /dev/hdc1
                                    ^^^^^^^^^ new physical volume

Removing physical volumes from a volume group:

# vgreduce my_volume_group /dev/hda1

The volume to remove shouldn't be in use by any logical volume. Check this by using the pvdisplay <device> command.

Creating a logical volume:

# lvcreate -L1500 -ntestlv testvg

This creates a new 1500MB linear LV and its block device special /dev/testvg/testlv

# lvcreate -L1500 -ntestlv testvg /dev/sdg

The same as above, but in this case specifying which physical volume in the volume group to allocate from.

# lvcreate -i2 -I4 -l100 -nanothertestlv testvg

This creates a logical volume 100 LE (logical extents) in size, with 2 stripes and a stripe size of 4 KB.

Removing a logical volume: the logical volume must be closed (and unmounted) before it can be removed:

# umount /dev/myvg/homevol
# lvremove /dev/myvg/homevol

Extending and reducing a logical volume: detailed instructions on how to accomplish this for different underlying filesystems can be found here:

http://tldp.org/HOWTO/LVM-HOWTO/extendlv.html
http://tldp.org/HOWTO/LVM-HOWTO/reducelv.html

In a "normal" production system it is recommended that only one PV exists on a single real disk. The reasons for this are outlined at http://tldp.org/HOWTO/LVM-HOWTO/multpartitions.html

Some useful external LVM resources:

http://tldp.org/HOWTO/LVM-HOWTO/
http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
http://www.gweep.net/~sfoskett/linux/lvmlinux.html
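
To tie this back to the MSA1000: a rough sketch of how the whole flow might look on top of the array's /dev/sda (the names msa_vg and homes_lv, the sizes and the ext3 choice are just illustrative, not what we've settled on yet):

# pvcreate /dev/sda                        (use the whole array as a single PV)
# vgcreate msa_vg /dev/sda
# lvcreate -L200G -nhomes_lv msa_vg        (a 200GB LV for the NFS-exported homes)
# mkfs.ext3 /dev/msa_vg/homes_lv
# mount /dev/msa_vg/homes_lv /srv/homes

Growing the volume later would then just be a matter of lvextend -L+50G /dev/msa_vg/homes_lv followed by a filesystem resize (resize2fs for ext3, possibly with the volume unmounted, depending on kernel support).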
