Older blog entries for adulau (starting at number 116)

6 Mar 2011 (updated 18 May 2011 at 22:10 UTC) »

2011-03-06 Why The Philosophical Works Should Be Free

A close look at a Welsh onion flower

Roberto Di Cosmo recently published a work called "Manifeste pour une Création Artistique Libre". It is not really a manifesto in the traditional sense but rather an essay about potential licensing schemes in the Internet age. My blog entry is not about the content of the work itself but about the non-free license used by the author. On the linuxfr.org website, many people (including myself) commented on how strange it is to publish a work about free works while the manifesto itself is not free (licensed under the restrictive CC-BY-NC-ND). The author replied to the questions, explaining his rationale for choosing the non-free license with an additional "non printing" clause added to the CC-BY-NC-ND.

I have a profound respect for Roberto's work promoting and supporting the free software community, but I clearly disagree with the claim that philosophical works must not allow derivatives and cannot be free works. I also know that Richard Stallman disallows derivative works of his various essays. If you carefully check the history of philosophical works, there are a lot of essays from various philosophers that went through revisions due to external contributions (e.g. several of Ivan Illich's works evolved over time through interactions and discussions with other people). It's true that publishing the evolution of a work was not a very common practice, but that was mainly due to the slowness of the publishing mechanisms, not to the works themselves.

The main argument used to avoid freeing such works is usually the integrity of the author's work. Yet a lot of works have been modified over time to reflect current usage of the language or to be translated into another language. Does this affect the integrity of the author's work? I don't think so, especially since, for any free work (including free software), attribution is required in any case. So by default, the author (and the reader) would see the original attribution and the modifications over time (something recently improved in the free software community by the extensive use of distributed version control systems like git).

Maybe it's now time to acknowledge that free software goes far beyond the simple act of creating software and also touches any act of thinking or creation.

Tags: freedom freesociety society copyright freesoftware

Syndicated 2011-03-06 21:35:57 (Updated 2011-05-18 22:10:50) from AdulauWikiDiary: RecentChanges

5 Mar 2011 (updated 27 Mar 2011 at 16:13 UTC) »

2011-03-05 Monitoring Memory of Suspicious Processes

Monitoring The Memory of Suspicious Processes

If you are operating many GNU/Linux boxes, it's not uncommon to have issues with some processes leaking memory. It's often the case for long-running processes handling large amounts of data, allocating many small chunks of memory without freeing them back to the operating system. You may have played with Python's "gc.garbage" or abused Perl's Scalar::Util::weaken function, but to reach that stage you first need to know which processes ate the memory.

When looking for processes eating memory, you usually have a look at the running processes using ps, sar, top, htop… For a first look, without installing any additional software, you can use ps with its sorting functionality:

%ps -eawwo size,pid,user,command --sort -size | head -20
 SIZE   PID USER     COMMAND
224348 32265 www-data /usr/sbin/apache2 -k start
224340 32264 www-data /usr/sbin/apache2 -k start
162444  944 syslog   rsyslogd -c4
106000 2229 datas     redis-server /etc/redis/redis.conf
56724 31034 datap    perl ../../pdns/parse.pl
32660  3378 adulau   perl pdns-web.pl daemon --reload
27040  4400 adulau   SCREEN
20296 20052 unbound  /usr/sbin/unbound
...

It's nice to have a list sorted by size, but the common questions usually are:

  • Is that normal?
  • What's the evolution over time?
  • Is the value increasing or decreasing over time?
  • Which process's memory usage is evolving badly?

My first guess was to dump the values above into a file, prefix each line with a timestamp and write a simple awk script to display the evolution and graph it (a minimal sketch of that do-it-yourself approach is included at the end of this entry). But before jumping into it, I checked whether Munin provides a default plugin to do that per process. There is no default plugin… but I found one called multimemory that basically does that per process name. To configure it, you just need to enable the plugin and list the processes you want to monitor:

[multimemory]
env.os linux 
env.names apache2 perl unbound rsyslogd

If you want to test the plugin, you can use:

%munin-run multimemory
perl.value 104148992
unbound.value 19943424
rsyslogd.value 162444
apache2.value 550055

You can then connect to your Munin web page and you'll see the evolution of each monitored process name. After that, it's just a matter of digging in with "valgrind --leak-check=full" or using your favorite profiling tool for Perl, Ruby or Python.
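For reference, here is a minimal sketch of the do-it-yourself approach mentioned above, for boxes where Munin is not available. The process name (apache2), the log file path and the 60-second interval are arbitrary choices for the example; SIZE is in kilobytes as reported by ps.

# Append one timestamped sample of the total SIZE of all "apache2"
# processes to a log file, once per minute.
while true; do
  ps -eo size,comm --sort -size |
    awk -v now="$(date +%s)" '$2 == "apache2" { total += $1 } END { print now, total }' \
    >> /var/tmp/apache2-size.log
  sleep 60
done

# Later, a quick look at the growth between the first and the last sample:
awk 'NR == 1 { first = $2 } END { print "growth (kB):", $2 - first }' /var/tmp/apache2-size.log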


Tags: unix command-line memory monitoring

Syndicated 2011-03-05 11:16:52 (Updated 2011-03-27 16:13:08) from AdulauWikiDiary: RecentChanges

2011-01-01 Often I'm Wrong But Not Always

A shaky night

Often I'm Wrong But Not Always...

Prediction is very difficult, especially if it's about the future. Niels Bohr

Usually at the beginning of the year, you see all those predictions about future technology or about social behaviour towards those technologies. In the information security field, you see plenty of security companies telling you that there will be many more attacks, or that those attacks will diversify to target your next mobile phone or your next-generation toaster connected to Facebook. Of course! More malware and security issues will pop up, especially if you increase the number of devices in the wild, the number of wild users, and especially the number of wild users waiting to get money fast. So I'll leave it up to the security companies to make press releases about their marketing predictions.

As we are at the beginning of a new calendar year, I was cleaning up my notes in an old Emacs folder (from 1994 until 2001). I discovered some interesting notes and drawings, and I want to share a specific one with you.

In those notes, I discovered a recurring interest in Wiki-like technologies at that time. Some notes reference Usenet articles (difficult to track down now) and c2.com articles about how a wiki is well (un)organized. Some notes were unreadable due to the lack of context for that period. There is even a mention of using a Wiki-like system in the enterprise, or of building a collaborative Wiki website for technical FAQs. There are some more technical notes about implementing the software for a wiki-like FAQ website, including a kind of organization by vote. I'll let you find today's website doing exactly that…

Suddenly, in the notes, there is a kind of brainstorming discussion about the subject. The notes include comments from myself and from other colleagues. And there is an interesting statement about Wiki-like technology from a colleague: just because you like a technology doesn't mean other people will use or embrace it. That's an interesting point, but the argument was used as a reason not to do anything or invest any time in a Wiki-like approach. Yes, it is right, but the question is more about how you build things and how people would use them. My notes on that topic end with the brainstorming discussion. A kind of shock to me…

What's the catch? Not doing or building something to test the idea out. You can talk eternally about whether an idea is good or bad, but the only way to know is to build it. I was already thinking like that, but I had forgotten that this happened to me… Taking notes is good, especially when they remind you that you should pursue your ideas and turn them into reality despite the surrounding criticism.

My conclusion to those old random notes would be something like this:

If you see something interesting and you have a strong conviction that it could succeed in one way or another, do or try something with it. (Please note the emphasis on the do.)

Looks like I'll keep this advice in mind for the coming years…

Tags: contribute startup innovation notes internet

Syndicated 2011-01-01 11:03:52 from AdulauWikiDiary: RecentChanges

28 Nov 2010 (updated 2 Dec 2010 at 20:08 UTC) »

2010-11-28 Why Do We Need More Wikileaks and Cryptome

Can you escape?

Why Do We Need More WikiLeaks and Cryptome?

In the past months, there have been many articles in favor of (Foreign Policy (2010-10-25), Salon (2010-03-17)), against (Washington Post (2010-08-02), Jimmy Wales "WikiLeaks Was Irresponsible" (2010-09-28) or Wikileaks Unlike Cryptome), or neutral about (New Yorker (2010-06-07)) WikiLeaks. Following public and private discussions about information-leaking websites like WikiLeaks, or even about the vintage Cryptome website, I personally think that the point is not whether to support a specific leaking website, but rather to support their diversity.

The term "truth" is usually mentioned in various places when talking about "leaking websites", but those websites only play a role in providing material to build your own "truth". And that's the main reason why we need more "leaking websites": you need measurable and observable results, just like in a scientific experiment. Diversity is an important factor, not only in biology, but also when you want to build some "truth" based on leaked information. Even if the leaked information seems to be the same raw stream of bytes, the way of disclosing it is already a method of interpretation (e.g. is it better to distribute it to journalists 4 weeks in advance? or is it better to provide a way for everyone to comment on and analyze all the leaked information at the same time?). As there is no simple answer, the only way to improve is to try many techniques or approaches and find out for yourself which is the most appropriate.

What should we expect from the next generation of "leaking websites"?

I don't really know, but here are some thoughts based on readings from HN and some additional ideas gathered from my physical and electronic readings.

  • Collaborative annotation of the leaked information (e.g. relying on the excellent free software Annotator from Open Knowledge Foundation)
  • Collaborative voting on leaked information: when you have a list of contributors annotating regularly, you may derive a list of voters to decide whether or not a piece of information should be leaked. This could be useful when there are doubts about sources, for example.
  • Collaborative distribution. A lot of the existing sites are based on centralized platforms that are susceptible to easy shutdown. (There was an interesting HN discussion about distributed content for WikiLeaks where another free software project like Git could play a role too.)

We are just at the beginning of a new age of information leakage that could be beneficial for our societies. But to ensure that benefit, we have to promote a diversity, not a scarcity, of those platforms.

Update 2010-12-02: It seems that some former members of WikiLeaks have decided to create an alternative platform (source: Spiegel). Diversity is king, especially for the interpretation of leaked information.

Tags: freedom internet society transparency wikileaks

Syndicated 2010-11-28 11:43:53 (Updated 2010-12-02 20:08:19) from AdulauWikiDiary: RecentChanges

2010-10-24 Open Access Needs Free Software

La bibliothèque humaniste de Beatus Rhenanus / Humanist Library of Beatus Rhenanus

The "Open Access Movement" depends on Free Software

Last week was Open Access Week, promoting open access to research publications and encouraging academia to make it the norm in scholarship and research. The movement is really important to ensure an adequate level of research innovation by easing access to research papers, and especially to avoid publisher lock-in, where research publications are not easily accessible and you are forced to pay an outrageous price just to get access. I think open access is an inevitable path for scientific research in the future, even if Nature (a non-open-access publication) disagrees.

But there is an interesting paradox in the open access movement that needs to be solved, especially if the movement wants to preserve its existence in the long run. Access must go further than just access to the papers; it must extend to the infrastructure that makes open access possible. As an example, take arXiv, one of the major open access repositories, where physics, chemistry and computer science open access papers are stored. arXiv had some funding difficulties in late 2009. What happens to those repositories when they run out of funding? A recent article on linuxfr.org about open access forgot to mention the free software aspect of those repositories. Why do even promoters of free software forget to mention the need for a free software infrastructure for open access repositories? Where is the software back-end of HAL (archive ouverte pluridisciplinaire) or arXiv.org?

Here is my call:

  • Open access repositories must rely on free software to operate and to ensure their long-term sustainability
  • Open access repositories must provide a weekly data set including the submitted publications and the linked metadata to ensure independent replication (something the community could help with)

Open access is inspired by the free software movement but somehow forgets that its own existence is linked to free software. Next time I see and enjoy a new open access project in a specific scientific field, I will ask myself about its publication repository and the software behind it.

Tags: freesoftware science openaccess research

Syndicated 2010-10-24 14:33:55 from AdulauWikiDiary: RecentChanges

2010-08-15 Free Software Is Beyond Companies

Blue thistle (Echinops ritro)

Free Software Is Beyond Companies

After Oracle's recent moves to dismiss their free software strategies, the usual discussion about free software and its viability inside large corporations came back. But I strongly believe that the question is not there. The question is not the compatibility of free software with large corporations or certain business practices. What is so inherently different about free software is its ability to provide free/convivial "tools" (as described by Ivan Illich) for everyone, including large corporations.

Following the recent GNOME Census, a lot of news articles showed the large or small contributions of various companies. But the majority of contributions are still made by volunteers, and some are paid for by small or large corporations. This doesn't mean that the company funding the author is always aware of the contribution, nor that the company is doing it for the sake of free software itself.

Another interesting fact is that free software authors tend to keep "their" free software with them when moving from one company to another. Free software authors often use companies as a funding scheme for their free software interests. Obviously companies enjoy that, because they have found a way to attract talented people who contribute directly or indirectly to the company's interests. But when the mutual interest goes away, authors and companies separate. That's usually when you see forks appearing and/or corporations playing different strategies (e.g. jumping into aggressive licensing or abandoning their open technology strategy).

Is that bad or good for free software? I don't know, but it generates a lot of vitality in the free ecosystem, meaning that free software is still very much alive and contributors keep working. But it clearly shows the importance of copyright assignment (or independent author copyright) and of making sure that the assignment is always aligned with the interest of a free society in keeping the software free.

Associated reading (EN): Ivan Illich, Tools For Conviviality

Further reading (FR): La convivialité, Ivan Illich (ISBN 978-2020042598)

Tags: freedom freesociety society copyright freesoftware

Syndicated 2010-08-15 09:48:17 from AdulauWikiDiary: RecentChanges

19 Jun 2010 (updated 19 Jun 2010 at 11:16 UTC) »

2010-06-19 Searching Google From The Command Line

Bookseller in Rennes

(shortest path for) Searching Google From the Command Line

Looking at the recent announcement from Google about their "Google Command Line Tool", it is nice but it is missing an obvious piece of functionality: searching Google… I found various pieces of software that do it, but they always rely on external software or libraries rather than on the core Unix tools. So can we do it using just standard Unix tools? (besides "curl", which could even be replaced by a telnet session doing the HTTP request if required)

To search Google from an API, you can use the AJAX interface to do the search (as the old Google search API is now defunct). The documentation of the interface is available, but the output is JSON. JSON is nice for a browser but funky to parse on the command line without external tools like jsawk. Still, it's text output, so it can be parsed by the wonderful awk (born in 1977, a good year)… In the end, it's just a stream of comma-separated "key:value" pairs. Then you can throw away the key and display the value:

curl -s "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&start=0&rsz=large&q=foo" | sed -e 's/[{}]/''/g' | awk '{n=split($0,s,","); for (i=1; i<=n; i++) print s[i]}' | grep -E "("url"|"titleNoFormatting")" | sed -e s/\"//g | cut -d: -f2-

and the resulting output:

http://en.wikipedia.org/wiki/Foobar
Foobar - Wikipedia
http://en.wikipedia.org/wiki/Foo_Camp
Foo Camp - Wikipedia
http://www.foofighters.com/
Foo Fighters Music
...

Now you can wrap the search in a bash function or an alias (replacing foo with $1), as sketched below. Do we need more? I don't think so, besides a Leica M9…
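As a concrete illustration, here is one possible way to wrap the one-liner above in a bash function; the name gsearch is just an arbitrary choice, and the body is the exact pipeline from above with foo replaced by $1.

# A minimal sketch of a bash function around the search pipeline above.
gsearch() {
  curl -s "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&start=0&rsz=large&q=$1" \
    | sed -e 's/[{}]//g' \
    | awk '{n=split($0,s,","); for (i=1; i<=n; i++) print s[i]}' \
    | grep -E '(url|titleNoFormatting)' \
    | sed -e 's/"//g' | cut -d: -f2-
}

# usage:
gsearch foo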

Tags: google command-line unix internet search

Syndicated 2010-06-19 10:14:31 (Updated 2010-06-19 11:16:18) from AdulauWikiDiary: RecentChanges

24 May 2010 (updated 27 Jun 2010 at 10:17 UTC) »

2010-05-24 Information Wants To Be Free Is Becoming An Axiom

Northern Gannet (fou de Bassan) on Rouzic Island

"Information wants to be free" is now becoming an axiom

The recent article "Saying information wants to be free does more harm than good" by Cory Doctorow on guardian.co.uk rang a bell for me. It seems that we still often don't understand what the profound meaning of this mantra or expression is. One of the origins of this expression could be around the fifties, when Peter Samson claimed: Information should be free.

When Steven Levy published his book "Hackers: Heroes of the Computer Revolution", the chapter "The Hacker Ethic" included a section called "All information should be free", in reference to the Tech Model Railroad Club (TMRC) of which Peter Samson was a member. The explanation given by Steven Levy:

The belief, sometimes taken unconditionally, that information should be free was a direct tribute to the way a splendid computer, or computer program, works: the binary bits moving in the most straightforward, logical path necessary to do their complex job. What was a computer but something which benefited from a free flow of information? If, say, the CPU found itself unable to get information from the input/output (I/O) devices, the whole system would collapse. In the hacker viewpoint, any system could benefit from that easy flow of information.

A variation of this mantra was coined by Stewart Brand at a hackers' conference in 1984:

On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other

We could even assume that the modified mantra was a direct response to Steven Levy's book and to his chapter "The Hacker Ethic" (a reference mentioned in a documentary called "Hackers - Wizards of the Electronic Age"). The mantra, or aphorism, has been used over the past twenty-five years by a large community. The application of the mantra by the GNU project is even mentioned in various documents, including, again, Steven Levy's book.

Coming back to Cory Doctorow's article: he doesn't want the emphasis to be on the information but on people's freedom. I agree with that point of view, but the use of "information wants to be free" is a different matter. I want to take it from a different angle: information is not bound by physical properties the way physical objects are. By being liberated from physical rules, information tends to be free.

Of course, this is not a real axiom, but it's not far from being one. If you look at the current issues in "cyberspace", they are always related to that inherent property of information. Have you seen all the unsuccessful attempts to make DRM (digital restrictions management or digital rights management, depending on your political view) work? All the attempts of the dying music industry to shut down OpenBitTorrent or any open indexing services? Or even the closing of newzbin, where the source code and database leaked at the same time? Or the inability to create technology to protect privacy (the techniques are not far from the failed attempts made by DRM technologies)?

Yes, "information wants to be free", just by effect and we have to live with that fact. I personally think it is better to abuse this effect than trying to limit the effect. It's just like fighting against gravity on earth…

Tags: freedom information free freesoftware

Syndicated 2010-05-24 16:55:44 (Updated 2010-06-27 10:17:02) from AdulauWikiDiary: RecentChanges

2010-05-03 Information Action Ratio

Street-Art in Rennes #17

Information-action Ratio or What's Your Opinion About Belgian Politics?

Listening to the Belgian news is often a bit surreal (which makes sense in the country of surrealism), as they talk about problems that we don't really care about or that don't really impact the citizen, even if the media claim that Belgian politics (and thereby the crisis just created by some politicians) affect our lives. But if you listen to every piece of breaking news, the majority of it is useless information that you can't use to improve your life or society. Neil Postman described this with a nice concept, the information-action ratio:

In both oral and typographic cultures, information derives its importance from the possibilities of action. Of course, in any communication environment, input (what one is informed about) always exceeds output (the possibilities of action based on information). But the situation created by telegraphy, and then exacerbated by later technologies, made the relationship between information and action both abstract and remote.

You can replace telegraphy with your favourite medium, but this is a real issue with current news channels (e.g. television, radio,…). The information is so distant from what you are doing every day. We can blame our fast communication channels for being very different from a book or an extensive article on a specific subject, where the information is often well organized and generates thinking (that can lead to action). Why are we listening to information that we don't care about? Why are we giving so much importance to that useless information? I don't have a clear answer. I'm sure of at least one thing: instead of listening to or viewing useless information in the media, like Belgian politics, I'll focus more on the media (including books) that increase my information-action ratio.

If you want to head in that direction too, I recommend Amusing Ourselves To Death by Neil Postman.

Tags: contribute media belgium

Syndicated 2010-05-03 20:20:32 from AdulauWikiDiary: RecentChanges

2010-02-14 Contribute Or Die

Stacking books until...

Contribute or Die?

In the past years, I have participated in plenty of meetings, conferences and research sessions covering technical and non-technical aspects of information security and information technology. Looking back and trying to understand what I have done right or wrong, and especially what the recipe for success is in any information technology project, I tend to find a common point in any successful (or at least partially successful) project: making concrete proposals and building them at the very early stages of the project.

A lot of projects tend to become meeting nightmares with no concrete proposal, just a thousand endless critiques of the past, present and future. Even worse, those projects are often tied to the "best practices" of project management, with an abuse of the broken waterfall model. After 3 or 4 months of endless discussion, there is not a single prototype or software experiment, just a pile of documents that keeps committees happy but leaves many software engineers angry.

If you look at successful (free and non-free) software projects or at favourable standardization processes, they always come from real and practical contributions. Just look at IETF practices compared to the "design by committee" methodology: practical approaches usually win. Why? Because you can see the pitfalls directly and reorient the project or the software development very early.

There is no miracle or silver-bullet approach to having a successful project, but the only way to make a project better is to make errors as early as possible. It's difficult or nearly impossible to see all the errors in those projects until you get your hands dirty. This is the basis of trial and error: you have to try in order to see whether something is an error or not. If you don't try, you are just lowering your chances of hitting errors and improving your project, your software or even yourself.

So if you contribute, you'll make errors, but this is much more gratifying than sitting on a chair whining about a project sheet that wasn't updated or having endless discussions. There was an interesting lightning talk at YAPC::EU in 2008, "You aren't good enough", explaining why you should contribute to CPAN. I think this is another way to express the same idea: "contribute, write code, prototype and experiment". Even if it is broken, someone else can fix it or start another prototype based on your broken one. We have to contribute if we want to stay alive…

Tags: contribute startup innovation

Syndicated 2010-02-14 21:45:31 from AdulauWikiDiary: RecentChanges

