Risks in Running Open Source Projects

Posted 3 Feb 2002 at 21:54 UTC by vivekv

What are the risks of running an Open Source project? Every maintainer of an Open Source project will always have risks that she/he needs to mitigate. What is the best way to do that?

Given that the number of Open Source projects that succeed is very small compared to the number undertaken, there seems to be a long list of risks, shared by all Open Source projects, that every open source developer has to mitigate.

I am not looking at the classic open vs. closed debate and its risks, but taking it one step further: once an open source project has been decided upon, what are the ongoing risks that the people on the project handle on a regular basis?

Some of the risks I can think of are:

  1. Declining user interest, for various reasons
  2. Not enough developers
  3. Not enough funding (hardware, software, hosting, etc.)
  4. Not enough product stability (not all software out there is Linux)
  5. Security risks
  6. High total cost of ownership, since the software comes with poor documentation and is an operational nightmare
I would like to hear the thoughts of other open source developers on what the risks are and how they go about handling them. Please post your comments and thoughts. I plan to write a HOWTO on this so that other new Open Source development teams can benefit.

Risks of Failure vs. Chances of Success, posted 3 Feb 2002 at 22:20 UTC by egnor » (Journeyer)

I'm not sure what the difference is between "HOWTO mitigate risks of failure in your free software project" and "HOWTO succeed in your free software project". And there are plenty of the latter... right?

I suppose it could be a different approach; instead of "how do I succeed", you ask "how do I avoid failure". I'm not sure it's much more productive. Either way, the most successful free software comes about when people aren't thinking about "mitigating risks" and "attracting developers" and "managing total cost of ownership" but instead about writing great, useful, usable software. Do that, and everything else will follow.

Also, of course, risks of failure (and chances of success) depend greatly on what you're trying to do, what you consider "success".

Antipatterns vs. Patterns, posted 3 Feb 2002 at 23:45 UTC by logic » (Journeyer)

egnor: The difference is really the difference between seeking out patterns in how you approach a problem (as seen in Design Patterns) and being wary of patterns which usually signal a problem or dysfunction (à la AntiPatterns). Both approaches are valuable, but the former (positive patterns) almost always receives an over-abundance of treatment (as you mentioned yourself). The latter (antipatterns) are very important in recognizing the warning signs of a failure-in-the-making, so you can nip it early. I'd consider AntiPatterns, or at least a related book, required reading to understand why you'd want to approach thinking about the problem like this.

And ignoring these issues, as you suggest, is a limited view. Not everyone is working on an open source project for their own amusement or gratification; today, it is becoming more prevalent for a traditional development organization to take on open source projects, to realize some of the benefits of the model, or just because they're making use of a piece of open source software in their ongoing work. These organizations obviously should be focusing on writing good software, but the most successful of them will also be spending time looking at issues like this (in addition to all of the other issues facing a software business) so that they last at least as long as the usable life of their software.

Confessions of an Open Source Consumer, posted 4 Feb 2002 at 03:04 UTC by garym » (Master)

Having spent over a decade recommending and deploying open source components in applications ranging from military robotics to websites, I may have a twisted perspective on this issue; I don't think any of these issues are intrinsic to open source products -- they are all part of all software risk management. In my experience, open source may have higher crisis rates, but greater chances for recovery. Let's see if I can explain that ...

Consider our own website; it's a simple blog-like site that presents our hotlists in a DMOZ sort of hierarchy. The constraints were [1] no database server and (related to that) [2] no special installed packages -- the goal was a portal site that could be done with a very basic webhost. We chose SIPS (sips.sourceforge.net) as the base of our news/links system and modified it to suit our needs.

SIPS quickly fell apart as a project. The code was not extensible, the HTML was hardcoded, and there was no community. Within weeks it was obvious we were on our own. That was 3 or 4 years ago, and our site still runs SIPS, modified to merge with other packages; because it was open source, the failure of the "vendor" did not impact us -- we knew the dangers and accepted that risk, and while we'd love to replace it with something of a less anti-pattern design, it works, and at a price we can afford.

Another example: GSP, later called GNU Server Pages, was a precursor to JSP. More precisely, while Jakarta was stalled in legal limbo-land, our clients needed that sort of facility and JSP did not exist in any useful/affordable package. We deployed several large web services in GSP, worked with its authors to fix the NES support, and everyone was happy. When Jakarta started to produce useful releases, almost every developer abandoned GSP, so again we were on our own; but because it was open source, this did not matter. We simply carried on, fixing and extending the code as required, and the last of these GSP applications was only retired last August.

Vendors vanish. That much is a fact of life. When a proprietary vendor vanishes, the code vanishes and the applications are screwed. When open source teams disband, no one really notices.

A second cited risk was the lack of documentation. Bad or missing/misleading documentation is the single most expensive obstacle we find in deploying any new technology. Where dev teams are co-operative and friendly, and where the mailing list archives are publicly accessible and searchable via Google, documentation is almost never missed. But in other cases? Recently we were given the old cop-out "Read the source!" ... I did a quick scan with some reverse-engineering tools and discovered the project contained 44,000 lines of code. Cool. So, reading continuously for a straight week, I might retain enough to avoid stupid mistakes and make sense of a few developer assumptions. In the Real World? Not a chance; we'll buy elsewhere. Face up to it: "Read the source" means "Reverse Engineer it", and that's an expensive proposition when people are paid by the hour.

The solution here is so simple: if you manage an open source project, the most valuable thing you get from your user community, second to a good bug report, is a question. Collect them all, and if you get FAQs, just do an htdig of your emails and cut and paste! Proprietary software vendors almost never do this; open source projects are mostly pretty good about it.
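As a concrete sketch of the FAQ harvesting described above -- assuming the list archive is a standard Unix mbox file; the filename and recurrence threshold here are invented for illustration:

    #!/usr/bin/env python
    """Harvest FAQ candidates from a mailing list archive in mbox format."""
    import mailbox
    from collections import Counter

    def faq_candidates(mbox_path, min_count=3):
        """Count normalized subject lines; recurring ones are FAQ candidates."""
        counts = Counter()
        for msg in mailbox.mbox(mbox_path):
            subject = (msg.get("Subject") or "").strip()
            # Strip reply/forward prefixes so a whole thread collapses
            # into a single question.
            while subject.lower().startswith(("re:", "fwd:", "fw:")):
                subject = subject.split(":", 1)[1].strip()
            if subject:
                counts[subject.lower()] += 1
        return [(s, n) for s, n in counts.most_common() if n >= min_count]

    if __name__ == "__main__":
        # "users-list.mbox" is a hypothetical archive filename.
        for subject, n in faq_candidates("users-list.mbox"):
            print("%4d  %s" % (n, subject))

Anything that shows up several times is a FAQ candidate, and the answers can then be cut and pasted from the archive itself.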

One big risk of open source is rooted in open source development methods, and a good solution is non-trivial: Change. In almost every commercial deployment of an open source component, we have hit a wall where newer versions of the software are not compatible with the installed system, and the earlier version is no longer supported. Effectively, this is the same as the "vendor vanishes" problem, so the strategy is the same: You stick with what you have and forget about the vendor.

Rapid changes can also be problematic, especially if those changes are critical to the software's function. I've seen projects produce minor-number revisions so dramatically different from the prior release that you are thrown back to square one in reverse engineering it. This cost is, in our experience, the most common cause of abandoning an otherwise promising deployment.

What is failure?, posted 4 Feb 2002 at 15:27 UTC by chexum » (Master)

As egnor stated, define success first; and as garym said, "Vendors vanish... No one really notices."

You can have a successful project, easing the lives of thousands out there, without any progress on "your" code. In a sense that might be a difference between Open Source and Free Software. With the latter, you are satisfied to know there are happy people using your code; with Open Source, you are in a position where you still want to make a living off it. Do you?

In the Wiki world there are a few tricks to "trap" a few users into contributing more than they would if your software were perfect. See Wikipedia: Rules to consider, especially "Always leave something undone". But the rest might apply too.

Consider also that, depending on the nature of your project, the "target" segment (blech, marketing) might not be responsive. That might be the case with a specialized SOCKS proxy, like Dante. They also ran a survey with a former version, and seem to have come out with a concept of proprietary modules. How successful, only they know, but I just hope the best for them. There are always people who can be regarded as freeloaders, caring only for the free stuff; there are also people who like to help you (still freely); and there are people who like to buy something from you. It's your choice whom to alienate and whom to keep near your stuff. I think each class is important, and the third is probably not the most important...

You haven't gotten into the legal risks, posted 4 Feb 2002 at 18:12 UTC by jbuck » (Master)

Oh, come on. You asked for risks, and people are talking about just whether the project will succeed or not. Much bigger things could go wrong. While what I'm talking about are low-probability events, they can't be ignored.

Some contributor to your project may send you something that he has no right to give you, especially if he works as a programmer in the US. In the worst case, his employer could sue the person running the project for financial damages; at minimum, the tainted code might need to be ripped out and a lot of work would have to be redone. The FSF avoids this problem by asking for copyright assignments and employer disclaimers.

If the project involves reverse engineering, there are legal risks. If it isn't done the right way, you could be sued. If a copy protection scheme is involved, there's the DMCA and possible jail. This doesn't mean that it can't be done, but it might pay to seek advice before starting the project if it seems risky.

And then there are patents. Just because a patent exists doesn't mean that it's valid or applies to your project, but this is a real threat. Prior art is a defense, so if you can find a technique that people were using before the filing date, you can use that. When it's possible, it's better to find a way to go around the patents rather than to just ignore them (e.g. Ogg vs MP3).

Finally, it's not correct to think that you're safe because you live in a country with more enlightened policies, as all governments are under pressure to submit to Intellectual Property World. Even if you don't break any laws of your own country, they may get you busted anyway. Ask Jon Johansen.

garym said..., posted 5 Feb 2002 at 19:26 UTC by julesh » (Master)

"Face up to it: "Read the source" means "Reverse Engineer it" and that's an expensive proposition when people are paid by the hour. "

While I agree in many cases, there are also many cases where "Read the source" doesn't mean "Reverse Engineer it". For many questions, a simple glance at the source of an application or library will suffice.

A recent case that happened to me involved sendmail. Sendmail is a piece of software that I've always considered a little bit scary and difficult to use, and as I rarely have to tamper with its settings I've never bought the O'Reilly sendmail book, so my documentation on it is fairly poor (that book is probably the only decent documentation that exists for it).

However, when I received an e-mail that was refused delivery because its headers were too large, I couldn't find an answer by browsing through any of the documentation I had. Following a suggestion from a friend, I downloaded the source code, which I had never looked at before, and found the location of the code that produced the error message I was seeing. From there, it was trivial to work back to other locations in the program that referenced the same set of variables, and within 5 minutes I had the solution to the problem.
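As a sketch of the first step of that technique -- locating where an error message is emitted -- something like the following works. The directory name and message text below are invented for illustration; they are not the actual sendmail strings:

    #!/usr/bin/env python
    """Find the source line(s) that emit a given error message."""
    import os

    def find_message(root, needle, exts=(".c", ".h")):
        """Walk a source tree, printing file:line for every line
        containing the error text."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith(exts):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as f:
                    for lineno, line in enumerate(f, 1):
                        if needle in line:
                            print("%s:%d: %s" % (path, lineno, line.strip()))

    # Hypothetical invocation: point it at the unpacked tree and a
    # distinctive fragment of the message from the bounce.
    find_message("sendmail-8.12", "headers too large")

From the match, the surrounding code shows which variables are involved, and from those you can work back to the setting that controls the behaviour.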

OK, it doesn't always work out like that, but there are many times it does. I've also been glad of the source code for various libraries I have used in my time, notably Borland's ObjectWindows, which, while proprietary, did have source code available; that helped me work around many of the library's quirks the last time I coded a project with it.

Risks - summing up..., posted 6 Feb 2002 at 13:33 UTC by vivekv » (Journeyer)

I think egnor himself defined success quite well: "writing great, useful, usable software". Now, my original question was: what are the risks that will prevent such an outcome? From garym's post it seems that documentation is a key item on the list. The availability of source code also helps, though "read the source" does not always equal "reverse engineer it". The second key thing is managing subsequent releases successfully, i.e. not upsetting users by failing to give them a good "upgrade path". Both of these points are about keeping the user happy, which is a good thing.

Now, these are not the only risks. We have other low-probability items, like copyright violations and the DMCA, that should also be mitigated. Many Open Source projects, in my opinion, choose to ignore these risks, which is not a good thing, but they don't have the resources to verify the legality of submissions. In many cases the risk falls on the author and not on the project itself.

In commercial software houses, people get paid to write software, so the luxury of slowly understanding the software by looking at its guts is not available. Thus keeping the "user" happy in this case means providing good documentation and quality software that is usable. These would be the mitigation plans for the various risks related to quality, design, and usability.

Open source doesn't mean non-professional, posted 6 Feb 2002 at 18:48 UTC by niksilver » (Journeyer)

Two risks that come to mind are:

  • Don't assume a workforce you haven't got. Shalomif said it well with this "rule": when a developer says he or she will work on something, it means "maybe". Just because a thousand people could work on your code, it doesn't follow that a single one will.
  • Choose the right licence. Licensing is a risk if you care what others do with your code. Maybe a GNU licence is good. Maybe a GNU licence is bad.

Then, if you're worried about the usual software problems (schedule, etc.), it's the usual software risks you run. Open source, or even free, doesn't mean you're non-professional or non-commercial. Open source is often a business decision, and as such it's only part of the picture.
