Older blog entries for amits (starting at number 52)

FUDCon India Planning: Weekly Meetings

If you want to keep track of how FUDCon India planning is shaping up, or want to lend a helping hand in deciding how it shapes up, there are two weekly meetings you should be aware of:

  1. The IRC meeting on #fudcon-planning on irc.freenode.net every Friday at 1300 UTC / 1830 IST.
  2. The weekly face-to-face meeting at the Red Hat office in Pune, India.  This happens every Tuesday at 1500 IST.  If you can't attend these and if some other day works for you, drop us an email at the India list.
The preparations for the FUDCon are well under way; watch the India list for minute-by-minute details.
This is a post from http://log.amitshah.net/, licensed CC BY-SA.

Syndicated 2011-07-18 04:57:00 (Updated 2011-07-18 04:57:01) from Amit Shah

First FUDCon India meeting

A few of us at the Red Hat Pune office strolled into a conference room to discuss plans and fix responsibilities for organising FUDCon India 2011 at Pune. It was an impromptu session, so we couldn't include the non-RH organisers, but we limited the discussion to items that the people present in the room had already committed to, according to the list of organisers.

We agreed on a set of action items.  There are no dates attached yet; those should be discussed and decided in the next meeting.  Some clarity needs to come from others; the first planning meeting on IRC, scheduled for the 15th of July, should shed light on those matters.

---

In addition to all that's listed below, we might get extra sponsorship money (in addition to the FUDCon budget) from some companies.

If there's some budget surplus, who'd like a fully-sponsored elephant ride through the city?  (Or at least to the event venue?)  Be quick to nominate yourself!

---

  • All: blog about the activities you're doing. Ensure your blog is aggregated on planet.fedoraproject.org.
    • (Rahul to contact other organisers to do this) 
    • Done: Rahul sent an email to the India list.

  • Banners - Suchakra taking care of this
  • Booklet - Ankur Sinha with others.
    • Rahul to take part in this too

  • 1st priority: get international speakers to book flights ASAP. Get them to submit 'sponsorship needed' tickets.
  • 2nd priority: contact HR, get invitation letters for foreign delegates
  • 3rd priority: book hotels

All three of these depend on:
  1. who's coming?
  2. who are we sponsoring for flights/hotels? (visas are self-sponsored)
  3. other guidelines -- these should get clarified at this Friday's IRC meeting

  • People not listed on the talks page but who will come:
    • Members from the Red Hat Community Architecture team
    • They will come from their own budget. Confirm if stay is also from their budget.
    • Get them and their talks listed on Wiki page.

  • T-shirt design: Rahul to contact design team
    • Done: Rahul opened a ticket
  • Videos: Ramki to contact pycon people who have offered to videograph + host videos

  • Swag: Can come from Ambassadors budget, but we could put our money if we have enough sponsorships.
  • FUDPub: Kushal to contact pubs.

  • Lunch: Should we sponsor till a cut-off? All? Only speakers + outstation delegates?
    • Only sponsor speakers + outstation delegates, have for-pay counters for everyone else. Rahul says he hasn't seen a conf where food is free.
    • If we get lunch out of the budget, we can do better things for fudpub (starters in addition to one round of drinks) and swag (for more people)

  • Website + online voting (for barcamp): Saleem
    • Rahul to send initial mail about website to fedora advisory board.
    • Done: Rahul sent initial mail.


  • Hotels + logistics: Satya + Murty
    • Get quotes
    • Book as soon as we know number of int'l / non-Pune delegates who we are sponsoring
    • We could book for people who are staying on their own budget, e.g., self-sponsored delegates / speakers.


  • Food/catering: PJP
    • multiple options
      • boxed set
      • stall
    • explore both for two days. 3rd day will be paid for by individuals (or adjusted if budget permits).
    • Check with COEP if they can keep canteens open for all three days


  • To check with COEP - Logistics team + Rahul
    • infrastructure - wireless
    • canteens remaining open for all 3 days
    • stalls for food allowed near conf rooms?
    • set up a fedora mirror


  • Barcamp voting
    • Need to have printers, stationery at site on first day.
    • Stick schedule per room and per day on each room.
    • Possibly display schedule on projector
    • Suggestion: Have 2-3 keynotes (talks w/o parallel tracks) on first day and ask participants to vote online or on a board somewhere before breaking for lunch. This can reduce confusion.

  • FUDCons generally have 4 parallel tracks
  • Keep a schedule ready 2 days before the event; minor changes allowed after voting on the first day.
This is a post from http://log.amitshah.net/, licensed CC BY-SA.

Syndicated 2011-07-14 13:16:00 (Updated 2011-07-14 13:16:31) from Amit Shah

FUDCon APAC 2011: Pune, Nov 4-6

Jared Smith, the Fedora Project Leader, has announced that the Pune bid has won for the 2011 APAC FUDCon.


https://fedoraproject.org/wiki/FUDCon:India_2011

If you're planning to attend, there's information on travel and costs on the bid page above.  A few community volunteers who will speak at the event can be sponsored, subject to budget restrictions.

Make sure to get your proposed talks or hackfests listed on the link above.  We already have a healthy list of topics; I'm eagerly looking forward to the event.

If you're local to Pune, you can help organise the event. Please contact Rahul Sundaram, the event owner, or send an email to the fedora-india mailing list for details.

This is a post from http://log.amitshah.net/, licensed CC BY-SA.

Syndicated 2011-07-08 04:55:00 (Updated 2011-07-08 04:55:44) from Amit Shah

Japanese Tragedy

It's a terrible tragedy that has struck Japan: first the earthquake, and then the tsunami. The structures withstood the earthquake shocks quite well, considering it was a very high-severity earthquake. However, the tsunami that followed caused a lot of damage: water flooding the streets, houses floating on water, cars and ships being washed away. The most destructive effect, however, could be the damage at the nuclear reactors.

Rather than believe the media's hype, here are a few websites that can help you track the developments at the nuclear reactors:

http://iaea.org/newscenter/news/2011/tsunamiupdate01.html

That is the International Atomic Energy Agency website, providing updates on the happenings in Japan.

http://mitnse.com/

That is the website of MIT's Nuclear Science and Engineering department, providing updates and information on the happenings in Japan.

It's best not to panic; the radiation levels are not high as of now but it's safest to take precautions and help others.


PS: I'm amazed by the restraint shown by the Japanese people themselves; images show they're swamped with water, snow and aftershocks, but they're still helping each other and there's no looting in the streets.  Excellent! I hope we can regain that culture in India, but it looks like we've moved too far away; people injured in accidents are left lying on the roads with vehicles making their way around them. Forget about taking them to the hospital, they aren't even moved to the side of the road... as a visitor recently said: "Pune has lost its humanity."

This is a post from http://log.amitshah.net/, licensed CC BY-SA.

Syndicated 2011-03-18 12:34:00 (Updated 2011-03-18 12:34:49) from Amit Shah

On Mind Maps

I wrote an article on mind maps in the BenefIT magazine for the March 2011 issue.  The people at BenefIT are nice enough to license the content under a CC license, so I can host the pdf and point you to it:

Mind-maps.pdf

This article talks about how mind maps are beneficial for the thought process and how you can use them to make decisions.

This is my second article published in the BenefIT magazine; earlier, I wrote one on taking frequent breaks from the computer.  Writing for non-tech, business-oriented people is different, and not very straightforward :-)

This is a post from http://log.amitshah.net/, licensed CC BY-SA.

Syndicated 2011-03-12 13:43:00 (Updated 2011-03-12 13:43:10) from Amit Shah

Maximum LCD Brightness Lower Than Before?

If you're trying out a kernel newer than 2.6.38-rc6 and find your LCD brightness doesn't go up to its maximum, here's some help: boot into an older kernel, set the brightness to maximum, then reboot into the newer kernel; you'll now get the maximum brightness you're used to.
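To see what the kernel thinks the brightness range is (and whether the maximum really dropped), you can query the standard backlight sysfs class. A minimal sketch, assuming the usual /sys/class/backlight layout; device names such as "intel_backlight" vary per machine:

```python
# Report each backlight device's brightness as a percentage of its
# maximum, using the standard sysfs backlight class attributes.
import glob
import os

def backlight_percent(device_dir):
    """Current brightness as a percentage of the device's maximum."""
    def read(name):
        with open(os.path.join(device_dir, name)) as f:
            return int(f.read())
    return 100 * read("actual_brightness") // read("max_brightness")

for dev in glob.glob("/sys/class/backlight/*"):
    print(dev, backlight_percent(dev))
```

Running this before and after switching kernels shows whether the reported maximum changed, or only the current value.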

The git commit by Indan Zupancic explains why this happens:


drm/i915: Do not handle backlight combination mode specially

The current code does not follow Intel documentation: It misses some things and does other, undocumented things. This causes wrong backlight values in certain conditions. Instead of adding tricky code handling badly documented and rare corner cases, don't handle combination mode specially at all. This way PCI_LBPC is never touched and weird things shouldn't happen.

If combination mode is enabled, then the only downside is that changing the brightness has a greater granularity (the LBPC value), but LBPC is at most 254 and the maximum is in the thousands, so this is no real functional loss.

A potential problem with not handling combined mode is that a brightness of max * PCI_LBPC is not bright enough. However, this is very unlikely because from the documentation LBPC seems to act as a scaling factor and doesn't look like it's supposed to be changed after boot. The value at boot should always result in a bright enough screen.

IMPORTANT: However, although usually the above is true, it may not be when people ran an older (2.6.37) kernel which messed up the LBPC register, and they are unlucky enough to have a BIOS that saves and restores the LBPC value. Then a good kernel may seem to not work: Max brightness isn't bright enough. If this happens people should boot back into the old kernel, set brightness to the maximum, and then reboot. After that everything should be fine.

For more information see the below links. This fixes bugs:

  http://bugzilla.kernel.org/show_bug.cgi?id=23472 
  http://bugzilla.kernel.org/show_bug.cgi?id=25072

This is a post from http://log.amitshah.net/, licensed CC BY-SA.

Syndicated 2011-03-01 15:38:00 (Updated 2011-03-01 15:45:11) from Amit Shah

Stay Healthy By Taking Breaks

Most of us lead sedentary lifestyles these days -- most of our time is spent in front of computers. This is slowly causing a lot of problems that people from previous generations haven't experienced: back aches, knee problems, wrist pains and myopia, among others. And just going to a gym or putting in one hour of physical activity a day isn't enough: it doesn't balance out the inactivity over the rest of the day.

I recently wrote an article in the BenefIT magazine that talks about two tools: Workrave and RSIBreak. Thanks to the publishers, the article is available in pdf format under a CC license.

I've tried both, but have been using Workrave for quite a while now and am quite happy with it. To briefly introduce them: both programs prompt the user to take a break at regular, configurable intervals. Workrave also suggests some stretching exercises that can be performed during the longer breaks. The shorter (and more frequent) breaks can be used to take your eyes off the monitor and relax them. Read the article for more details.

I reviewed Workrave version 0.9.1 in the article, though the current version as of now is 0.9.3, which differs a little from what's described there. The main difference is the addition of a 'Natural Rest Break' that gets triggered when the screen saver activates. This is nice: if the user walks away from the computer for a prolonged period, the rest break has in effect been taken, and the next one is scheduled for the configured interval after the screen saver is unlocked.

Both programs are available in the Fedora repositories: Workrave is based on the GTK toolkit (and integrates nicely with the GNOME desktop), whereas RSIBreak is based on the Qt toolkit (and integrates nicely with the KDE desktop). Give them a try for a cheap but effective way of staying healthy!

Syndicated 2011-01-21 20:22:00 (Updated 2011-01-21 20:22:19) from Amit Shah

Idea: Faster Metadata Downloads With Yum and Git

The presto plugin for yum has worked great for me so far.  It's been very useful, not because of download caps, but for the time saved in getting the bits downloaded.  The time saved is significant when the bandwidth is not too good (it never is).

However, I've observed that in some cases the presto metadata is larger than the actual package -- e.g., for a font.  If a font package, say 21KB in size, has a 3KB deltarpm, that's a saving of 18KB of downloads -- a very impressive 85%.  However, the presto metadata itself could be more than 400KB, nullifying the advantage of the drpm.  In this corner case we're effectively downloading 418KB instead of 21KB: nearly 20 times the actual package size.
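The arithmetic is easy to check; a quick sketch using the example's numbers (415KB stands in for the "more than 400KB" of metadata):

```python
# Corner-case numbers from the font-package example above (sizes in KB).
package_kb = 21    # full rpm
drpm_kb = 3        # deltarpm
metadata_kb = 415  # presto metadata: "more than 400KB"

saving_pct = 100 * (package_kb - drpm_kb) // package_kb  # what the drpm saves
total_kb = metadata_kb + drpm_kb                         # what actually gets fetched
ratio = total_kb / package_kb                            # vs. the plain package

print(saving_pct, total_kb, round(ratio, 1))  # → 85 418 19.9
```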

So here's an idea: why not let git handle the metadata for us?  The metadata is a text (or sqlite) file that lists package names, their dependencies, version numbers and so on.  Since text can be very easily handled by git, it should be a breeze fetching metadata updates from a git server.  At install-time (or upgrade-time), the metadata git repository for a particular Fedora version can be cloned, and on each update, all that's necessary for yum to do is invoke 'git pull' and it gets all the latest metadata.  Downloads: a few KB each day instead of a few MBs.
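The client side of this idea is tiny: clone the per-release metadata repository once, then 'git pull' on every update. A minimal sketch; the repository URL and cache path are hypothetical, since no such repo exists today:

```python
# Sketch of the proposed client flow. The repo URL and cache directory
# below are hypothetical -- no such metadata repository exists today.
import os

def sync_plan(repo_url, dest):
    """Return the git command that brings the local metadata checkout
    up to date: a one-time clone, or a cheap incremental pull
    (the pull is meant to be run with cwd=dest)."""
    if os.path.isdir(os.path.join(dest, ".git")):
        return ["git", "pull"]
    return ["git", "clone", repo_url, dest]

repo = "git://mirrors.example.org/fedora/metadata-f14.git"  # hypothetical
print(sync_plan(repo, "/var/cache/yum/metadata-git"))
```

On the first run this yields the clone command; every later run only fetches the day's delta -- a few KB instead of a few MB of fresh metadata.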

The advantages are numerous:

  • Saves server bandwidth
  • Uses far fewer server resources when using the git protocol
  • Scales really well
  • Compresses really well
  • Makes yum faster for users
    • I think this is the biggest win -- not having to wait ages for a 'yum search' to finish every day has to get anyone interested.  It makes old-time Debian users like me very happy.
There are some challenges to be considered as well:
  • Should the yum metadata be served by just one canonical git server, while the packages get served by mirrors?  Not every mirror may have the git protocol enabled, nor can the Fedora Project ask every mirror to configure git on their servers.
    • Doing this could leave slow mirrors unable to serve the packages referenced by the latest metadata
    • This can be mitigated by serving git over HTTP
  • The metadata can keep growing
    • This can be mitigated by having a separate git repository for the metadata belonging to each release.  Multiple git repos can be set up easily for extra repositories (e.g., for external repos or for multiple version repos while doing an upgrade).
  • The mirror list has to be updated to also include git repositories that can be worked on with 'git remote'.
I've filed an RFE for this feature.  For someone looking for a weekend hack for yum in python, this should be a good opportunity to jump right in!  If you intend to take this up, get in touch with the developers, make sure no one else is working on this yet (or collaborate with others) and update the details on the Fedora Feature Page.

Syndicated 2010-12-30 20:58:00 (Updated 2010-12-30 20:58:48) from Amit Shah

Book review: The Grand Design

I just finished reading Stephen Hawking and Leonard Mlodinow's 'The Grand Design' (wikipedia link; Amazon link here). It's a great book to get up to speed on where physics stands as of today in our understanding of the universe.

Physicists come up with theories to explain why the world behaves the way it does. Those which show promise continue to be tested with new observations. Some of the theories stand the test of a few real-life situations, some don't. Some make sense in particular settings, some don't. Some are easily understandable by the layperson, some are not. All this doesn't mean that the theories which don't make sense or which don't stand up to real-world tests or observations are wrong. They just make sense in a particular setting and we use them to accurately model our world in that setting. We use other theories to explain other facets of our world. Or even the same ones, when put under a magnifying glass. If you think this doesn't make sense, the book will make it understandable. If you think it sounds crazy, it is, and the book will tell you why. If you think physicists are going mad, well, I don't think they are, unless you mean they're going mad in the search of the one true answer to life, the universe and everything that's beyond "42". (Yes, the authors are cool enough to include the Hitchhiker's reference (Amazon link) as well.)

The writing is very clear. The first two chapters can be read and understood by people who have not taken advanced courses in science; they lay the foundation for the details in the next six chapters.

Things get progressively more interesting, and slightly more complicated, from chapter 3 onwards -- naturally, since that's where quantum theory starts being introduced.

The authors use great everyday analogies in explaining complex phenomena. They also make good use of humour to keep the readers engaged and the tone light. There are no equations used in the book, so they don't alienate people who studied science back in their school and college years but have lost touch with it since. (Stephen Hawking mentions an editor telling him that for each equation he used in 'A Brief History of Time' (Amazon link), he'd lose half the readership. I think that's a brilliant way to make the text easily accessible and understandable.)

I read about physics after a really long time. I don't even remember reading or studying quantum theory, though I guess I would have. At many points while reading the book, I felt that if I had had such a resource by my side while studying for my engineering classes, it would have done a much better job of arousing and sustaining my interest in the classical and theoretical sciences. Some of the questions that came up while I was reading were answered later in the text; others the authors didn't broach, for the sake of simplicity. I'm sure I can get answers to some of my questions by poking around in more detailed literature on those topics. I'm glad I've retained my inquisitive nature when it comes to the sciences, and that I can still raise questions that aren't answered in simple terms.

To conclude, this is a great book for people without a science background who want to learn about our universe and how it came into being: read the first two or three chapters and gloss over the rest. It's also a great book for people who have studied physics but lost touch with it, to recollect some theory and catch up with physicists' current understanding of how the universe formed and why things are the way they are.

I haven't read 'A Brief History of Time' by Stephen Hawking, nor the updated 'A Briefer History of Time' (Amazon link) by Stephen Hawking and Leonard Mlodinow, the authors of 'The Grand Design'. I guess that book would be the right starting point before one reads this one, but I didn't find myself getting lost too much; perhaps it helps others. I intend to read 'A Brief History of Time', which I've owned for quite a while now, in the near future.

It's difficult being a genius, figuring out how the universe works, piecing together its past and determining its future. It's doubly difficult to write about it in a way that laypersons can understand. Kudos to Stephen Hawking and Leonard Mlodinow and the team behind 'The Grand Design' for doing just that.

PS:  I'm running an experiment again, this time with links to amazon product pages.  I'm putting the amazon links separately so you know you'll go to a company's site.  Let me know how this works -- does the '(Amazon link)' text hurt the flow?  Do you want links to Amazon product pages at all?  Should I make the Amazon link the default?

Syndicated 2010-12-30 19:43:00 (Updated 2010-12-30 19:43:45) from Amit Shah

Fedora Miniconf and foss.in/2010

A very delayed post on the Fedora Miniconf and foss.in/2010.

foss.in/2010 was held on the 15th, 16th and 17th of this month in Bengaluru. I could confirm my attendance only very late, so I missed out on the CfP and a chance at speaking in the main conference, but I did manage to get a speaking slot in the Fedora miniconf. Thanks to Rahul for accommodating me at short notice.

One of the main things I was looking forward to was meeting my team-mate Juan Quintela. Though we met recently at the KVM Forum 2010, I was going to use this opportunity to catch him and discuss some of the things I'm working on that overlap with his domain, virtual machine live migration, and get things going.

The other thing was to get to know more people -- Fedora users and developers from India who I've spoken with on the irc channel but not met, other developers and users of free software from around the world. Add to that a few people who I've worked with and not met and also people whose software I use daily and who I want to thank for working on what they do.  It was also nice meeting the old known faces from the IBM LTC in Bengaluru -- Balbir Singh, Kamalesh Babulal, Vaidy, Aneesh K. V., et al.

It's always a certainty that there will be users of the virtualization (particularly KVM) stack around, and it's nice to get a feel for how many people are using KVM, in what ways, how well it works for them, and so on. That's always a motivation.

The Fedora miniconf was on the 16th. The schedules for talks for miniconfs aren't published by the foss.in people, so it was left to us to do our advertising and crowd-pulling. Rahul had listed the speakers and the talks on the Fedora foss.in/2010 wiki page. I went ahead and took out a few print-outs for the talks and assigned time slots for each talk depending on the suggested length given by the speakers for their talks as well as the slot allotted to the Fedora Project for the miniconf. The print-outs of the schedules were meant to be pasted around the venue to attract attention to the remotest section that was to host the miniconf, Hall C. However, we just ended up keeping the printouts as handouts at the Fedora stall that we set up. The Fedora stall was quite a crowd-puller. And since it was set up on the second day, we didn't have to compete with the other stalls since they had their share of attendance on the first day.

The other members of the Fedora crowd, Rahul, Saleem, Arun, Shreyank, Aditya, Suchakra, Siddhesh, Neependra, ... have written about the Fedora stall and their experiences earlier (and linked to from the Fedora foss.in/2010 page).

The Fedora miniconf was a great success, going by the attendance and the participation we had. My talk was the first, and I could see we had a full house. I think my talk went quite well. It could have been a little disappointing for people who expected demos, but I wanted to aim this talk at people who had a general sense of using and deploying Fedora virt as well as Fedora on the cloud, and at people who would go and do stuff themselves rather than being given everything on a silver platter. This resonates with the foss.in philosophy of recent years of being a contributor-oriented conference rather than a user-oriented one, so I didn't mind doing that. Gauging by the response I got after the talk, I believe I was right in doing so. (I even got an email from the CEO of a company saying it was a great talk.)

The other talks in the Fedora miniconf were engaging; I learnt quite a bit about what the others are up to. Arun's talk on packaging emacs extensions was entertaining. He connects with the audience; I liked that about him.

Aditya's talk on Fedora Summer Coding was a good call to students to participate in the free software world via Fedora's internship programme. He narrated his own experience as a Fedora Project intern, which touches the right chords of the intended audience. I think doing more such talks will get him over the jitters of presenting to a big crowd.

Suchakra's doing good work on accessing an embedded Linux box via a console inside a browser tab -- it's a very interesting project.

Neependra's talk was a good walk-through of using tracing commands to see what really happens in the kernel when a userspace program runs. He walked through the 'mkdir' command and showed the call trace. This was a good demo. He spoke about the various situations in which tracing tools could be used, not just for debugging, and that should have set people's thoughts in motion as to how they could get more information on how the system behaves instead of just using a system.

Shreyank's talk on creating a web tool for managing student projects and the Fedora Summer of Code was interesting as well. It was nice to see the way an actual student project was designed and developed and how it's going to make future students' and mentors' lives easier. This talk should have served as a good introduction to the flow and process students have to go through in applying, starting, reviewing and completing their project.

Apart from the Fedora miniconf, I attended a few sessions in the main conf. James Morris's keynote on the history of the security subsystem in the Linux kernel was very informative. Rahul's keynote on the 'Failures of Fedora' was packed with anecdotes and analyses of the decisions taken by the Fedora Project and their impact on users and developers. Fedora (earlier Red Hat Linux) is one of the oldest distributions around, and any insight into its functioning, and data on what works and what doesn't, is a great source of information for building engaging communities of users and contributors.

Lennart's two talks on systemd and the state of surround sound on Linux were not very new to me. However, there were a few bits in there that provided some food for thought.


Juan's talk on live migration was packed full of experiences in getting qemu to a state where migration works fairly well. He also spoke about all the work that's left to do. It was thoroughly technical, and I think the people who were misled by its 'sysadmin' label, or by the title (expecting to migrate from an older physical machine to a newer one without downtime), quickly left the hall. Those who stayed were either people who work on QEMU/KVM (especially the folks from the IBM LTC) or people too polite to walk out.

Dimitris Glezos's talk on building large-scale web applications was a very informative one for me. I've never done web programming (except for html, css and a bit of php ages ago), and this was a good intro for me to understand what various web development frameworks there are, their pros and cons, the way to deploy them, the way to structure them, etc. It was evident he took a lot of effort to prepare the slides and the talk, it was totally worth it.

Danese Cooper's keynote on the Wikimedia Foundation was an equally informative talk. She spoke on a wide range of topics, including the team that makes up Wikimedia, their servers and datacentres, their load balancing strategy, their backup systems, their editing process, their localisation efforts, their search for a new mirror site in the APAC region, etc. I was interested in one aspect, machine-readable wikipedia content, to which they had a satisfactory answer: they're migrating to semantic web content and would look at a machine-readable API once they're done adding semantics to their content.

The rest of the time was spent at the Fedora booth and talking to Juan and other friends.

The foss.in team announced this would be the last foss.in, so thanks to them for hanging around so long. To fill the void, we're going to have to step up and organise a platform for like-minded people from the free/open source software community around here. I've been part of organising some events earlier in different capacities, and I'm looking forward to being part of an effort that provides such a platform. There's a FUDCon being planned for next year in Pune, I'll be involved in it, and will take things along from there.

Syndicated 2010-12-30 05:21:00 (Updated 2010-12-30 05:21:43) from Amit Shah
