Older blog entries for adulau (starting at number 118)

4 Sep 2011 (updated 26 Jun 2012 at 19:03 UTC) »

2011-09-04 Information Security Is Not a Matter of Compliance

A Radio On a Piano

Information Security Is Not a Matter Of Compliance But a Matter of Some Regular and Boring Activities

Drawing conclusions from experience is not always a scientific approach, but a blog is a place to share experience. Today, I would like to share my past experience with information security, and especially how difficult it is to reach real security through the compliance detour proposed by the industry or even by society.

Compliance is a Different Objective Than Information Security

Many compliance mechanisms exist in information security to ensure, on paper, the security of a service, a company or a process. I won't list all of them, but you might know PCI-DSS, TS 101 456, ISO/IEC 27001 and so on… Very often the core target of a company is simply to get the final validating document at the end of the auditing process.

Of course, many of those validation processes impose strong security requirements on the procedural aspects of information security management within the company. This is usually a great opportunity for the information security department to increase its budget or its visibility. Everything is nice. But usually, once the paperwork is finished and the company has its golden certificate, the investment in information security is just put aside.

But concrete information security is composed of many little dirty jobs that no one really wants to do. In compliance documents those tasks are usually underestimated (e.g. a check-box at the end of a long list) or not even mentioned (e.g. discarded during the risk assessment because they seem insignificant). Yet those tasks are a core part of information security, not only for protection but also for detecting misuse earlier.

I have summarized the tasks in three large groups (it's not an exhaustive view) that show some of the core jobs to be performed when protecting information systems:

Reading and Analyzing Never Ending Log Files

Log analysis is usually the main trigger for finding a compromised system. When Clifford Stoll discovered that a system at LBL was compromised, it was due to a 75-cent accounting discrepancy. Likewise, the recent security breach at kernel.org was discovered through an error in the logs from software that was not even installed (Xnest) and a pop-up about an invalid certificate. That's how infections and compromised infrastructure get discovered.

But to discover those discrepancies, you need someone at the other end. The answer here is not a machine reading your logs (I can already hear the SIEM vendors claiming this can be automated). It's a human with some knowledge (and some doubts) who picks up on something unusual that can lead to the detection of something serious.

Log analysis is tedious work that needs curious and competent people. It's difficult to describe in a compliance document. The job can be boring and is rarely rewarded. That's why you sometimes see the idea of "outsourcing" log analysis, but can an outsourced analyst detect an accounting issue because he knows that a given user does not work during that shift?

IMHO, it's sometimes better to invest in people and promote regular log analysis than to pursue yet another security certification without the real security activities behind it.
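As a trivial illustration of the kind of regular, human-driven review I mean, here is a minimal sketch (assuming a combined-format Apache access log; the path is only a placeholder) that surfaces the rarest request patterns first, so a curious analyst has a short list of oddities to start from:

# Show the least frequent method/path/status combinations first:
# rare entries are often the interesting ones for a human to investigate.
awk '{print $6, $7, $9}' /var/log/apache2/access.log | sort | uniq -c | sort -n | head -20

It won't replace the analyst, but it gives the curiosity something to work on every day.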

Reducing the Attack Surface

The less software you have, the better it is for security. It sounds very obvious, but that's a core concept. We pile more and more features into each piece of software we use. I have never seen a control in a security standard or certification that recommends having a policy to reduce software or remove old legacy systems. If you look carefully at the "Systems Development Life Cycle", it always shows a perfect world where nobody has to get rid of old crappy code.
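A minimal sketch of where such a policy could start (assuming a Debian-based box; the commands are only an illustration): list what is installed and what is actually exposed, then question every entry.

# What is installed, largest packages first: candidates for removal.
dpkg-query -W -f='${Installed-Size}\t${Package}\n' | sort -rn | head -20
# What is actually listening on the network right now.
netstat -tlnp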

Maintaining the Software and Hardware

Maintaining software and hardware could fall into the category of "reducing the attack surface", but it's another beast, often underestimated in many security compliance processes. Software is like a living organism: you have to take care of it. You don't acquire a tiger and put it in your garden without taking care of it. Before maintaining, you obviously need to design systems with "flaw-handling in mind", as Marcus J. Ranum, Wietse Venema, or Saltzer and Schroeder (in 1975) said. In today's world, we are still not going in that direction, so you have to maintain the software to keep up with the daily flow of security vulnerabilities.

The main issue with a classical information system is its interactions with other systems and with its environment. If you (as a security engineer) recommend updating a piece of software in a specific infrastructure, you always hear the same song: "I can't update it", "It will be done with the yearly upgrade" (usually taking 4 years), "Do you know the impact of this update on my software?" (and obviously you didn't write their software), "It's done" (while a check shows it's still reporting the old version number), "It's not connected so we don't need to patch" (while the proxy logs scare you with the volume of data exchanged) and… the classic "it's not managed by us" (while reading the product name in the job title of the user who answers that).

Yes, upgrading software (and hardware) is a dirty job: you have to bother people and chase them every day. Even in information security, upgrading software is a pain and you usually break stuff.
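When you hear "It's done", a quick check of what the service actually reports is usually enough to settle the discussion. A minimal sketch, assuming a web server you are allowed to probe (the hostname is a placeholder, and some servers hide or fake their banner):

# Compare the claimed upgrade with what the server actually announces.
curl -sI http://intranet.example.org/ | grep -i '^Server:'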

All those dirty jobs are part of protecting information systems, and we have to do them. Security certification is distracting a lot of professionals from those core activities. I know they are arduous and rarely rewarded, but we have to do those tasks if we want to make the field more difficult for the attackers.

You might ask why there is a picture of a radio on a piano… both can produce the same "music" but are operated in very different ways. Just like information security done on a system and information security done on paper are two different things.

Tags: infosec compliance

Syndicated 2011-09-04 16:17:30 (Updated 2012-06-26 19:03:20) from AdulauWikiDiary: RecentChanges

2011-05-22 Ease Your Log Analysis with Ranking

Apocalypse de milieu de terrain / Mittelfeldapokalypse (Tim Ernst)

Ease Your Log Analysis With BGP Ranking and logs-ranking

Raphael Vinot and I have been working on a network security ranking project called BGP Ranking to track malicious activity per Internet Service Provider (referenced by their ASN, Autonomous System Number). The project is free software and can be downloaded, forked or updated on GitHub. As BGP Ranking recently reached a beta stage, we now have a nice dataset about the ranking of each Internet service provider in the world. Every day, we try to find new ways to use this dataset to improve our lives and remove the boring work from network forensics.

A very common task in network forensics is analysing a huge stack of log files. Sometimes you don't even know where to start, as the volume is so large that you end up looking for random patterns that might be suspicious. I wrote a small piece of software called logs-ranking to prefix each line of a log file (currently only W3C common/combined log files are supported) with the origin ASN and its BGP Ranking value. logs-ranking uses the whois interface of RIPE RIS to get the origin AS for an IP address and the CIRCL BGP Ranking whois interface to get the current ranking.

To use it, you just stream your log file to it and specify the log format (apache in this case):

cat ../logs/www.foo.be-access.log | perl logs-ranking.pl -f apache > www.foo.be-access.log-ranked

and you'll get output like this, with the origin ASN and the ranking (a float value) prefixed to the existing log line:

AS15169,1.00273578519859,74.125.... 
AS46664,1.00599888392857,173.242...

So now you'll be able to sort your logs with the most suspicious entries first (at least those coming from the most suspicious Internet service providers):

sort -r -g -t"," -k2 www.foo.be-access.log-ranked

This can be used, for example, to spot infected clients in proxy logs that try to reach the bulletproof hoster where a malware C&C is hosted, or infected machines on the Internet trying to infect your latest web-based software… The ranking can be used for other purposes too; it's just a matter of imagination.
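Before diving into individual lines, a quick aggregated view can also help. A minimal sketch (assuming the ranked file produced above, with the origin ASN in the first comma-separated field):

# Count log entries per origin ASN, most frequent first.
cut -d"," -f1 www.foo.be-access.log-ranked | sort | uniq -c | sort -rn | head -10

Combined with the ranking, this quickly shows whether a highly ranked (suspicious) provider is also responsible for a surprising share of your traffic.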

Tags: networkforensic infosec freesoftware

Syndicated 2011-05-22 19:20:49 from AdulauWikiDiary: RecentChanges

6 Mar 2011 (updated 18 May 2011 at 22:10 UTC) »

2011-03-06 Why The Philosophical Works Should Be Free

A close look at a Welsh onion flower

Roberto Di Cosmo recently published a work called "Manifeste pour une Création Artistique Libre". It is not really a manifesto in the traditional sense but rather a work about potential licensing schemes in the Internet age. My blog entry is not about the content of the work itself but about the non-free license used by the author. On the linuxfr.org website, many people (including myself) commented on how strange it is to publish a work about free works while the manifesto itself is not free (licensed under the restrictive CC-BY-NC-ND). The author replied to the questions, explaining his rationale for choosing the non-free license, with an additional "non-printing" clause added to the CC-BY-NC-ND.

I have profound respect for Roberto's work promoting and supporting the free software community, but I clearly disagree with the claim that philosophical works must not allow derivatives and cannot be free works. I also know that Richard Stallman disallows derivative works of his various essays. If you carefully check the history of philosophical works, there are many essays from various philosophers that went through revisions due to external contributions (e.g. Ivan Illich has multiple works that evolved over time through interactions and discussions with people). It's true that publishing the evolution of a work was not a very common practice, but that was mainly due to the slowness of the publishing mechanisms and not to the nature of the works themselves.

The main argument used against freeing such works is usually the integrity of the author's work. A lot of works have been modified over time to reflect current use of the language or to be translated into another language. Does this affect the integrity of the author's work? I don't think so. Especially since for any free work (including free software) attribution is required in any case. So by default, the author (and the reader) would see the original attribution and the modifications over time (something recently improved in the free software community by the extensive use of distributed version control systems like git).

Maybe it's now time to recognize that free software goes far beyond the simple act of creating software and also touches any act of thinking or creation.

Tags: freedom freesociety society copyright freesoftware

Syndicated 2011-03-06 21:35:57 (Updated 2011-05-18 22:10:50) from AdulauWikiDiary: RecentChanges

5 Mar 2011 (updated 27 Mar 2011 at 16:13 UTC) »

2011-03-05 Monitoring Memory of Suspicious Processes

Monitoring The Memory of Suspicious Processes

If you are operating many GNU/Linux boxes, it's not uncommon to have issues with processes leaking memory. It's often the case for long-running processes handling large amounts of data, typically allocating many small chunks of memory without freeing them back to the operating system. You may have played with Python's "gc.garbage" or abused Perl's Scalar::Util::weaken function, but to reach that stage, you first need to know which processes are eating the memory.

When looking for processes eating memory, you usually have a look at the running processes using ps, sar, top, htop… For a first look without installing any additional software, you can use ps with its sorting functionality:

%ps -eawwo size,pid,user,command --sort -size | head -20
 SIZE   PID USER     COMMAND
224348 32265 www-data /usr/sbin/apache2 -k start
224340 32264 www-data /usr/sbin/apache2 -k start
162444  944 syslog   rsyslogd -c4
106000 2229 datas     redis-server /etc/redis/redis.conf
56724 31034 datap    perl ../../pdns/parse.pl
32660  3378 adulau   perl pdns-web.pl daemon --reload
27040  4400 adulau   SCREEN
20296 20052 unbound  /usr/sbin/unbound
...

It's nice to have a list sorted by size, but the usual questions are:

  • Is that normal?
  • What's the evolution over time?
  • Is the value increasing or decreasing over time?
  • Which process's memory usage is evolving badly?

My first guess was to dump the values above into a file, add a timestamp in front, and write a simple awk script to display the evolution and graph it (a rough sketch of that approach is shown at the end of this entry). But before jumping into it, I checked whether Munin has a default plugin to do that per process. There is no default plugin… but I found one called multimemory that basically does that per process name. To configure it, you just need to add it as a plugin with the processes you want to monitor:

[multimemory]
env.os linux 
env.names apache2 perl unbound rsyslogd

If you want to test the plugin, you can use:

%munin-run multimemory
perl.value 104148992
unbound.value 19943424
rsyslogd.value 162444
apache2.value 550055

You can then connect to your Munin web page and see the evolution for each monitored process name. After that, it's just a matter of digging in with "valgrind --leak-check=full" or using your favorite profiling tool for Perl, Ruby or Python.
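For completeness, here is a rough sketch of the "first guess" approach mentioned above: timestamped ps snapshots appended to a file and summed per command with awk. The file path and the apache2 example are only placeholders.

# Append a timestamped snapshot of per-process memory usage (e.g. from cron).
ps -eo size,comm --no-headers | awk -v ts="$(date +%s)" '{print ts","$1","$2}' >> /var/tmp/memlog.csv
# Show the evolution of the summed size for one command (here apache2).
awk -F"," '$3=="apache2" {sum[$1]+=$2} END {for (t in sum) print t, sum[t]}' /var/tmp/memlog.csv | sort -n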

Monitoring Memory of Suspicious Processes

Tags: unix command-line memory monitoring

Syndicated 2011-03-05 11:16:52 (Updated 2011-03-27 16:13:08) from AdulauWikiDiary: RecentChanges

2011-01-01 Often I'm Wrong But Not Always

A shaky night

Often I'm Wrong But Not Always...

Prediction is very difficult, especially if it's about the future. (Niels Bohr)

Usually at the beginning of the year, you see all those predictions about future technology or social behaviour around those technologies. In the information security field, you see plenty of security companies telling you that there will be many more attacks, or that those attacks will diversify, targeting your next mobile phone or your next-generation toaster connected to Facebook. Of course! More malware and security issues will pop up, especially if you increase the number of devices in the wild, the number of their wild users, and especially the number of wild users waiting to get money fast. So I'll leave that to the security companies eager to make press releases about their marketing predictions.

As we are at the beginning of a new year, I was cleaning up my notes in an old Emacs folder (from 1994 until 2001). I discovered some interesting notes and drawings, and I want to share a specific one with you.

In my various notes, I discovered a recurring interest in Wiki-like technologies at that time. Some notes reference Usenet articles (difficult to find again) and some c2.com articles about how a wiki is well (un)organized. Some notes were unreadable due to the lack of context for that period. There is even a mention of using a Wiki-like system in the enterprise, or of building a collaborative Wiki website for technical FAQs. There are some more technical notes about implementing the software for a wiki-like FAQ website, including a kind of organization by vote. I'll let you find today's website that does exactly that…

Suddenly, in the notes, there is a kind of brainstorming discussion about the subject. The notes include some discussion from myself and from other colleagues. And there is an interesting statement about Wiki-like technology from a colleague: just because you like a technology doesn't mean other people will use or embrace it. That's an interesting point, but the argument was used to avoid doing anything or investing any time in a Wiki-like approach. Yes, it is right, but the question is more about how you build things and how people would use them. My notes on that topic end with the brainstorming discussion. A kind of shock to me…

What's the catch? Not doing or building something to test it out. You can talk eternally about whether an idea is good or bad, but the only way to know is to build it. I was already thinking like that, but I had forgotten that this happened to me… Taking notes is good, especially when you learn that you should pursue your ideas and turn them into reality despite the surrounding criticism.

My conclusion to those old random notes would be something like this:

If you see something interesting and you have a strong conviction that it could succeed in one way or another, do or try something with it. (Please note the emphasis on the do.)

Looks like I'll keep this advice in mind for the years to come…

Tags: contribute startup innovation notes internet

Syndicated 2011-01-01 11:03:52 from AdulauWikiDiary: RecentChanges

28 Nov 2010 (updated 2 Dec 2010 at 20:08 UTC) »

2010-11-28 Why Do We Need More Wikileaks and Cryptome

Can you escape?

Why Do We Need More WikiLeaks and Cryptome ?

In the past months, there have been many articles in favor of (Foreign Policy (2010-10-25), Salon (2010-03-17)), against (Washington Post (2010-08-02), Jimmy Wales' "WikiLeaks Was Irresponsible" (2010-09-28), or "Wikileaks Unlike Cryptome"), or neutral about (New Yorker (2010-06-07)) WikiLeaks. Following public and private discussions about information-leaking websites like WikiLeaks, or even about the vintage Cryptome website, I personally think that the point is not about supporting or not a specific leaking website, but about supporting their diversity.

Usually the term "truth" is mentioned in different places when talking about "leaking websites", but their role is just to provide material for building your own "truth". And that's the main reason why we need more "leaking websites": you need measurable and observable results, just like for a scientific experiment. Diversity is an important factor, not only in biology, but also when you want to build some "truth" based on leaked information. Even if the leaked information seems to be the same raw stream of bytes, the way it is disclosed is already a method of interpretation (e.g. is it better to distribute it to journalists 4 weeks in advance? Or is it better to provide a way for everyone to comment on and analyze all the leaked information at the same time?). As there is no simple answer, the only way to improve is to try many techniques and approaches to find out for yourself which is the most appropriate.

What should we expect from the next generation of "leaking websites"?

I don't really know, but here are some thoughts based on reading HN and some additional ideas gathered from my physical and electronic readings.

  • Collaborative annotation of the leaked information (e.g. relying on the excellent free software Annotator from Open Knowledge Foundation)
  • Collaborative voting on leaked information: when you have a list of contributors annotating regularly, you may derive a list of voters to decide whether or not a piece of information should be leaked. This could be useful when there are doubts about sources, for example.
  • Collaborative distribution: a lot of the existing sites are based on a centralized platform that is easy to shut down (there is an interesting HN discussion about distributed content for WikiLeaks, where another piece of free software like Git could play a role too).

We are just at the beginning of a new age of information leakage that could be beneficial for our societies. But the only way to ensure that benefit is to promote a diversity, and not a scarcity, of those platforms.

Update 20101202: It seems that some former members of WikiLeaks have decided to build an alternative platform (source: Spiegel). Diversity is king, especially for the interpretation of leaked information.

Tags: freedom internet society transparency wikileaks

Syndicated 2010-11-28 11:43:53 (Updated 2010-12-02 20:08:19) from AdulauWikiDiary: RecentChanges

2010-10-24 Open Access Needs Free Software

La bibliothèque humaniste de Beatus Rhenanus / Humanist Library of Beatus Rhenanus

The "Open Access Movement" depends on Free Software

The past week was Open Access Week, promoting open access to research publications and encouraging academia to make this the norm in scholarship and research. The movement is really important to ensure an adequate level of research innovation by easing access to research papers, and especially to avoid publisher lock-in, where research publications are not easily accessible and you are forced to pay an outrageous price just to get access. I think Open Access is the inevitable way forward for scientific research, even if Nature (a non-Open Access publication) disagrees.

But there is an interesting paradox in the open access movement that needs to be solved, especially if it wants to preserve its existence in the long run. Access must go further than just access to the papers: it must extend to the infrastructure that makes open access possible. As an example, take one of the major open access repositories, arXiv, where physics, chemistry and computer science open access papers are stored. arXiv had some funding difficulties in late 2009. What happens to those repositories when they run out of funding? A recent article on linuxfr.org about open access forgot to mention the free software aspect of those repositories. Why do even promoters of free software forget to mention the need for a free software infrastructure for open access repositories? Where is the software back-end of HAL (archive ouverte pluridisciplinaire) or arXiv.org?

Here is my call:

  • Open access repositories must rely on free software to operate and to ensure their long-term survival
  • Open access repositories must provide a weekly data set including the submitted publications and the linked metadata, to allow independent replication (where the community could help)

Open access is inspired by the free software movement but somehow forgets that its own existence is linked to free software. Next time I see and enjoy a new open access project in a specific scientific field, I will ask myself about their publication repository and its software.

Tags: freesoftware science openaccess research

Syndicated 2010-10-24 14:33:55 from AdulauWikiDiary: RecentChanges

2010-08-15 Free Software Is Beyond Companies

Chardon bleu (Echinops Ritro)

Free Software Is Beyond Companies

After Oracle's recent moves to dismiss their free software strategies, the usual discussion about free software and its viability in large corporations is back. But I strongly believe that the question is not there. The question is not the compatibility of free software with large corporations or with certain business practices. What is so inherently different about free software is its ability to provide free/convivial "tools" (as described by Ivan Illich) for everyone, including large corporations.

Following the recent GNOME Census, a lot of news articles showed the large or small contributions of various companies. But the majority of contributions are still made by volunteers, and some contributors are paid by small or large corporations. This doesn't mean that the company funding an author is always informed of the contribution, nor that the company does it for the inner purpose of free software.

Another interesting fact is that free software authors tend to take "their" free software with them when moving from one company to another. Free software authors often use companies as a funding scheme for their free software interests. Obviously companies enjoy that, because they have found a way to attract talented people to contribute directly or indirectly to the company's interests. But when the mutual interest goes away, authors and companies separate. That's usually when you see forks appearing and/or corporations playing different strategies (e.g. jumping into aggressive licensing or abandoning their open technology strategy).

Is that bad or good for free software? I don't know, but it generates a lot of vitality in the free ecosystem, meaning that free software is still very much alive and contributors keep working. But it clearly shows the importance of copyright assignment (or independent author copyright) and of making sure that the assignment is always aligned with the interest of a free society in keeping the software free.

Associated reading (EN): Ivan Illich, Tools For Conviviality

Further reading (FR): La convivialité, Ivan Illich (ISBN 978-2020042598)

Tags: freedom freesociety society copyright freesoftware

Syndicated 2010-08-15 09:48:17 from AdulauWikiDiary: RecentChanges

19 Jun 2010 (updated 19 Jun 2010 at 11:16 UTC) »

2010-06-19 Searching Google From The Command Line

Bookseller in Rennes

(shortest path for) Searching Google From the Command Line

Looking at the recent announcement from Google about their "Google Command Line Tool", it is nice but it is missing an obvious feature: searching Google… I found various pieces of software that do it, but they always rely on external software or libraries and not really on the core Unix tools. So can we do it using just standard Unix tools? (besides "curl", which could even be replaced by a telnet session doing the HTTP request if required)

To search Google from an API, you can use the AJAX interface to do the search (as the old Google search API is now defunct). The documentation of the interface is available, but the output is JSON. JSON is nice for a browser but is awkward to parse on the command line without external tools like jsawk. Still, it's just text output, and text can be parsed by the wonderful awk (made in 1977, a good year)… In the end, this is just a stream of comma-separated "key/value" pairs. Then you throw away the keys and display the values.

curl -s "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&start=0&rsz=large&q=foo" | sed -e 's/[{}]/''/g' | awk '{n=split($0,s,","); for (i=1; i<=n; i++) print s[i]}' | grep -E "("url"|"titleNoFormatting")" | sed -e s/\"//g | cut -d: -f2-

and the resulting output:

http://en.wikipedia.org/wiki/Foobar
Foobar - Wikipedia
http://en.wikipedia.org/wiki/Foo_Camp
Foo Camp - Wikipedia
http://www.foofighters.com/
Foo Fighters Music
...

Now you can wrap the search in a bash function or an alias (replacing foo with $1), as sketched below. Do we need more? I don't think so, besides a Leica M9…
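A minimal sketch of such a function (the name gsearch is my own choice; the query is passed as-is, so multi-word searches need URL encoding, e.g. + between words):

# Put this in your ~/.bashrc: search Google and print URL/title pairs.
gsearch() {
curl -s "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&start=0&rsz=large&q=$1" | sed -e 's/[{}]//g' | awk '{n=split($0,s,","); for (i=1; i<=n; i++) print s[i]}' | grep -E "(url|titleNoFormatting)" | sed -e 's/"//g' | cut -d: -f2-
}
# Usage: gsearch foo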

Tags: google command-line unix internet search

Syndicated 2010-06-19 10:14:31 (Updated 2010-06-19 11:16:18) from AdulauWikiDiary: RecentChanges

24 May 2010 (updated 27 Jun 2010 at 10:17 UTC) »

2010-05-24 Information Wants To Be Free Is Becoming An Axiom

Northern Gannet (fou de Bassan) on Rouzic Island

"Information wants to be free" is now becoming an axiom

The recent article "Saying information wants to be free does more harm than good" by Cory Doctorow on guardian.co.uk rang a bell for me. It seems that we still often don't understand what the profound meaning of this mantra or expression is. One origin of the expression could be around the fifties, when Peter Samson claimed: information should be free.

When Steven Levy published his book "Hackers: Heroes of the Computer Revolution", the chapter "The Hacker Ethic" included a section called "All information should be free", in reference to the Tech Model Railroad Club (TMRC), of which Peter Samson was a member. The explanation given by Steven Levy:

The belief, sometimes taken unconditionally, that information should be free was a direct tribute to the way a splendid computer, or computer program, works: the binary bits moving in the most straightforward, logical path necessary to do their complex job. What was a computer but something which benefited from a free flow of information? If, say, the CPU found itself unable to get information from the input/output (I/O) devices, the whole system would collapse. In the hacker viewpoint, any system could benefit from that easy flow of information.

A variation of this mantra was made by Stewart Brand at a hackers' conference in 1984:

On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other

We could even assume that the modified mantra was a direct response to Steven Levy's book and to his chapter "The Hacker Ethic" (a reference mentioned in a documentary called "Hackers - Wizards of the Electronic Age"). The mantra, or aphorism, has been used over the past twenty-five years by a large community. The application of the mantra by the GNU project is even mentioned in various documents, including, again, Steven Levy's book.

Coming back to Cory Doctorow's article: he doesn't want the emphasis to be on the information but on people's freedom. I agree with that point of view, but the use of "information wants to be free" is a different matter. I want to take it from a different angle: information is not bound to physical properties the way physical objects are. By being liberated from physical rules, information tends to be free.

Of course, this is not a real axiom, but it's not far from being one. If you look at the current issues in "cyberspace", they are always related to that inner property of information. Have you seen all the unsuccessful attempts to make DRM (digital restrictions management or digital rights management, depending on your political view) work? All the attempts by the dying music industry to shut down OpenBitTorrent or any open indexing service? Or the closing of Newzbin, where the source code and the database leaked at the same time? Or the inability to create technology to protect privacy (the techniques are not far from the failed attempts made by DRM technologies)?

Yes, "information wants to be free", just by effect and we have to live with that fact. I personally think it is better to abuse this effect than trying to limit the effect. It's just like fighting against gravity on earth…

Tags: freedom information free freesoftware

Syndicated 2010-05-24 16:55:44 (Updated 2010-06-27 10:17:02) from AdulauWikiDiary: RecentChanges

109 older entries...
