Older blog entries for zanee (starting at number 191)

17 Apr 2010 (updated 17 Apr 2010 at 19:29 UTC) »

Hit&Run patching

I keep coming across patches from random individuals on these here tubes with names like anonymous and NONAME, posted to SourceForge and other places. Why in God's name SourceForge still allows this is beyond me. Sending a patch to an author or maintainer you have a working relationship with is one thing. For instance, if I receive a patch from, let's say, Daniel Lai: I've been friends with that kid since elementary school, I'm aware of his mentality, and I don't really need to be concerned about applying his patch. Do you know Daniel Lai? Probably not. Do I know him? Yes. Do I know you? No. Technically, contacting someone through SourceForge or some other online VCS repository would generally work if people actually checked the email they registered with. Many don't, and I have just coined a term for these types of people: Hit&Run patchers.

A Hit&Run patcher is someone who sends you a patch with no relevant information about it. Usually this happens by email, but sometimes the patch is just posted to a corner of a project website with no discernible information beyond the patch itself. If one is lucky, an obscure username like 'asdf1234' or 'Anonymous' is attached. Usually these are small patches, but every once in a while a large one turns up. That is a bad thing, because then you, the maintainer of the project, have to apply the patch and then figure out what exactly it is doing. Nine times out of ten it's formatted incorrectly and needs various corrections. If you are reading this definition, you've probably received a Hit&Run patch. Feel free to put it on the back burner and attend to the patches that have been properly described.

Being a Hit&Run patcher is akin to someone slapping you in the face rudely with an answer to a question you've simply never asked. "*SLAP* Here's an answer bitch, 42, try figuring out what the question is.. SUCKKKERRRR". It's just wholly unacceptable social behavior and should be tolerated by no one!

So, yes, I have done this myself: fixed something and just sent the patch from some random internet cafe on the furthest inhabited island in the Atlantic Ocean, addressed to whatever email resembled the maintainer's or author's. I will flog myself later. This behavior is unacceptable! Having practiced Hit&Run patching, I should not expect my patch to be applied. Or, if it is applied, I should expect it to take more time than is reasonable as the maintainer tries to figure out the question to his or her life, the universe, and then this anonymous patch. Really, my only complaint should be with myself; this would all be fine, if not for the fact that I want my patch applied to trunk.

So what have we learned? Submitting a patch anonymously is simply a really bad way to go about getting said patch accepted. There are a couple of things that are formally required for kind, civilized society to continue its existence. These things exist primarily for tracking, accreditation, and communication purposes.

  1. A working, valid email address, in case one needs to be contacted about the patch or commentary.
  2. A valid name, even if it's just a handle, so the author or maintainer may attribute credit for said patch.

Everything else can be negotiated, but at the very least the above two items are absolutely required. What would help even more is discussing your patch and what it does in a sane, rational manner: something along the lines of "This changes XYZ in X function/class/method/whatever and achieves X functionality." Also explain why you feel your patch should be applied, as in "This will help users in this corner case for which no current functionality exists" or "This fixes a bug or security hole, based on the samples I have provided." All of that makes the life of the maintainer or author of the package so much more pleasant! When a patch comes with the above material, it's simple to look at the code and say: "This person knows what they are doing. They've provided a valid patch that adheres to the functionality described and fits within the guidelines of the project; they've provided all the needed facets. I've spoken with them; there's nothing left to do but apply the patch and cut a new release." Anything else is barbaric.

Next time you come across a Hit&Run patcher, feel free to relay this link to them in whatever forum they've posted their patch.


Syndicated 2010-04-17 18:58:51 from Christopher Warner » Advogato

Relations and FacultyStaffDirectory for Plone4

Here is a patch to get Relations working with Plone4.
Here is a patch to get FacultyStaffDirectory working with Plone4. You will need to install Kupu; whenever I get around to updating it for TinyMCE, I'll provide a patch for that too. In the meantime, let me know if there are any problems so I can update. Cheers!

[1]: http://dl.dropbox.com/u/5510475/relations.plone4.patch
[2]: http://pypi.python.org/pypi/Products.Relations
[3]: http://dl.dropbox.com/u/5510475/facultystaffdirectory.plone4.patch
[4]: http://plone.org/products/faculty-staff-directory/releases/2.1.3


Syndicated 2010-04-12 20:53:55 from Christopher Warner » Advogato

Products.membrane patch

If you are using Plone4 and need membrane, you are probably aware that it doesn't work. Here's a patch that fixes it; please vet it. If you have problems, please let me know. This is already sitting in the ticket system for review.

[1]: http://plone.org/products/membrane
[2]: http://dl.dropbox.com/u/5510475/membrane.patch


Syndicated 2010-04-09 21:00:35 from Christopher Warner » Advogato

JOB: Network Operations Administrator

Preferred skills: troubleshooting T1 and DSL circuits, a firm understanding of TCP/IP and subnetting, and comfort at the command line (Linux, Cisco). You should also understand what webhosting/mailhosting is and how it is deployed. You must speak clearly and have good writing skills, be available for flexible hours (including overnight), be able to work some weekends, and sometimes carry an on-call phone. There are also two open positions for field techs, which require: knowledge of wiring (Cat5, Cat6, Cat3, coaxial, fiber), comfort configuring routers and CPEs (this is trainable), and the ability to work fast and speak clearly (they will be dealing with our clients directly).

You'll be working for a guy I consider my brother, so it's legit and I can vouch for him. Contact me and I'll send you the email; please reference me in the subject line and somewhere in your resume. This is NYC only, and I must know you, have worked with you, or be able to vouch that whoever you're recommending is just as badass as you are.

Thanks..


Syndicated 2010-03-25 02:42:35 from Christopher Warner » Advogato

PyCurl maintenance

I'll be doing some PyCurl maintenance and preparing for a release sometime in the near future. The first release will just be cleanups all around, so this would be a good time for any concerned party to get me those small patches I've seen floating about. If you are a distro packager with patches you've been applying in your builds that you think should have gone upstream, please contact me. I'll try to hunt down as many of these as I can myself, with the help of some interns. As a note, my employer, the NYU Institute for the Study of the Ancient World, has been very gracious in allowing me the time to do so, because I don't really have any as it is.

Also, to whoever wrote the patch for Python 3.0 support: unfortunately, you posted it anonymously, and I will not commit it without discussion with the person who wrote it. If I don't hear anything, I will reimplement Python 3.0 support myself. I'd really hate to redo a chunk of the work when a readily acceptable patch may already exist, so please contact me! I'd prefer an email to my kernelcode.com address, but SourceForge is OK as a communication method as well.

I also need to update some stale links on the PyCurl website, which I will try to get around to ASAP. As for Windows installation of PyCurl, I've seen some issues there and will try to duplicate them, but as I don't use Windows on a daily basis, it's going to be touch-and-go in a VM for a little bit. If you can explicitly state the problems and issues you are having, it will help me immensely in shoring that up.


Syndicated 2010-03-25 01:07:01 from Christopher Warner » Advogato

China has banned this site.

It seems that if you are in any way affiliated with a government site, have been in some form of military service, or just generally speak about anything the Chinese government doesn't like, they will have your site banned from popular search engines in China. This is a known-known in the net community and is pretty much tolerated for the most part. That said, most of the information posted here is syndicated in several places, and there are articles I have written or published that would allow dissidents to have a voice. This includes eavesdropping on fiber optic signals and other things that were at one point heavily linked from Chinese websites and forums. That, and the proper way to race your vehicle in the event you planned to do it illegally on regular roadways; the last laced with warning after warning about why doing such a thing is stupid, but nonetheless.

Let me preface this by saying I have nothing against China as a country and never have, but banning me from Baidu isn't going to stop your citizens from reading my site. Removing it from Baidu's index is simply not going to be effective enough. My site is syndicated to probably 4 or 5 different places; you'll have to ban all of those (which, from the looks of it, are fine), and considering I've barely said anything egregious, it's just funny to me. I tried to re-add my links to the index, and you went even further, removing nearly all of my links from the Baidu index. I'm not sure if this is because of the recent Google tiff or what, and I'm not sure how many others you have done this to. In all sincerity, it doesn't even matter. Anyone who reads this site doesn't have to waste time searching Baidu or trying to figure out how to punch through your firewall.

What really gets me is this: if you are so right, if your ideas are so wholesome and virtuous, if you are doing what is right for China, why would you need a firewall? Wouldn't your values win out by their sheer existence? By the sheer, simple truth of what you are saying? So I have a message for the Chinese government.

At the end of the day, this battle against information you seem to be waging will fail. Unless you wall off China from the internet completely, you will simply not win; it's just human nature to search for truth. Good luck.

That said, if you are a citizen, in whatever country you reside, with the freedom to search for truth, you should probably not support search engines or internet companies that still operate in China.


Syndicated 2010-03-24 16:04:21 from Christopher Warner » Advogato

PyCurl on OS X

If you're trying to install PyCurl 7.19.0 on OS X specifically (Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386) and are wondering why exactly it is failing: it's because setup.py calls curl-config, which points at the static version of libcurl. Part of the problem is that Apple's build of curl still references libcurl.a, but they have seemingly removed the static library from 10.6.2. It's not stubbed or replaced by anything, so the install fails. The other part of the problem is that PyCurl is looking for a static library in the first place. I'm not going to get into a discussion of why static libraries are nearly useless in most cases (even in embedded programming), except to say: static libraries are nearly useless in nearly all cases, unless you know for sure you'll never, ever, ever, until humankind has ended, be upgrading the library. If you have that sort of vision and insight, I'd like you to pick out some lotto numbers for me.

To fix it, replace lines 99-101 in setup.py with the following:

    libs = split_quoted(os.popen("'%s' --libs" % CURL_CONFIG).read())

I've reported the issue to both Apple and the PyCurl mailing list.
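For context, the patched line just splits the flags curl-config prints into an argument list for the linker. A minimal sketch of that parsing, using the stdlib shlex module (distutils.util.split_quoted does the same job inside setup.py; the sample flags below are illustrative, not what any particular system prints):

```python
import shlex

def parse_link_flags(curl_config_output):
    """Split curl-config's whitespace-separated linker flags into a
    list, honouring shell quoting -- the job split_quoted does in
    pycurl's setup.py."""
    return shlex.split(curl_config_output)

# e.g. flags as `curl-config --libs` might print them
flags = parse_link_flags("-L/usr/lib -lcurl -lssl -lcrypto -lz\n")
```

The whole point of the patch is simply to ask curl-config for the shared-link flags (--libs) instead of the static ones that no longer resolve on 10.6.2.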

[1]: http://pycurl.sourceforge.net/
[2]: http://people.redhat.com/drepper/no_static_linking.html


Syndicated 2010-03-16 16:42:15 from Christopher Warner » Advogato

Ontology about prehistoric archaeology sites

Dr. Roxana María Danger Mercaderes wrote a PhD thesis loosely titled "Extraction and analysis of information from a web semantic perspective." Unfortunately, my espanol is extremely poor. However, I need to read the thesis, so I translated it via Google, but the translation dies on page 41. Essentially, she comes up with an ontology for prehistoric archaeological sites.

If the thesis is sound (meaning I can actually finish reading it) and the ontology is practical, I'd like to use it as the basis for a conversion into an actual SPARQL endpoint/content type that I could readily apply in a real-world context. If anyone could help me translate this ontology into English, that would be very helpful. I suppose I could spend the time doing it myself and brushing up my Spanish; that would be much slower, mind you.

[1]: http://krono.act.uji.es/PhDs/RoxPhD/view
[2]: http://krono.act.uji.es/Links/ontologies/Archaeology.zip/view


Syndicated 2010-03-10 20:43:57 from Christopher Warner » Advogato

The Archaeology Program

This NPS online course has been invaluable to me in learning more about archaeology. I am by no means an archaeologist, but it's extremely appalling to me how much the act of curation is lacking and how little structured data is available. One would believe the acquisition of as many data facets as possible would be a complete must-have; I am quickly learning that it is not so. My personal finding is that there is no shortage of trouble when trying to research the proper way to manage archaeological data: there is no one true standard or method for data collection.

Suffice it to say, in my own opinion after a good six weeks of reading, there should be. Part of the problem, I suppose, is laziness and a lack of due diligence out in the field. Obviously, I mean that as no slight to the archaeologists in the dirt hunting for treasure. It's truly no small task to properly record the GPS coordinates, name, marker, color, Harris matrix data, photographs, etc. of a dig/site when you have many other things to do, including actually researching whatever has been found. Compounded with the lack of appropriate funding and with time constraints, it begins to get very difficult.

That said, the other side of the coin is taking that curated data and exposing it in an easily accessible, public fashion. There is simply no shortage of halfway, broken implementations here that don't fully expose the actual object we are speaking of. Most of what I've found exposes the object for display or conceptually, which, while needed for a museum, doesn't actually concentrate on the object itself. Curation at the exhibition level, I'm finding, is handled more on a case-by-case basis. The specific object and site data should be available in such a fashion that any museum curator could pull down whatever specific data they wanted on a find, or even just on a specific object within a whole find.

So, how does this affect me? Well, I want that data so I can do semantic web "stuff" with it. Actually, I wouldn't be doing all this background research if the data existed in a unified way so I could play with it. I want to mine the data from a specific archaeological site and be able to find relations I would otherwise not know exist. I then want to make observations on that data; well, actually, I don't want to make the observations myself. I'll leave that to the serious archaeologists, but I would like to make that research possible and play with web data research based on it. Seeing as I have some of the foremost researchers at my fingertips, this will help me immensely.

This originally started when I began implementing an ontology content type for Plone, which would allow an ontology to be built describing whatever anyone wanted. After reading a couple of thesis papers on changing ontologies and the management problems there, I went further down the rabbit hole and simply hit a wall, where I realized I was trying to solve the wrong problem. That can't be the right approach; in fact, that is the problem right now. Everyone is trying to be the "authority" on the conceptual understanding of the data in digital form, and letting everyone create their own ontology isn't solving anything. I'm currently rethinking the whole idea. My new approach involves finding a proper solution for a solid foundation of data from a site/dig. Conceptually understanding the data is nice, but unless you have a solid foundation of data, it's inherently pointless. The idea is to leave the understanding to a requisite professional but give them the ability to see the data in ways they never have before! The only way to do this is to give them a solid foundation of data and then build conceptual tools separately for each project or theory. Or, in short: bioinformatics for archaeology.

The first issue, laziness in collecting the data (or just the sheer excess of data), isn't easily solvable, I suspect. One can attempt to make it easier by providing a starting point for the actual object, however. Every find, every item, should have the requisite photo(s). Let's face it: in archaeology you are dealing with real-world 3D objects. It's not a formula or a conceptual understanding of something so much as it is an object in front of you, one you can hold, feel, touch, and smell; and because it's a real-world representation of an object, it should have a photo. From that photo we should be able to extrapolate and acquire GPS coordinates based on the location of the object, and then as many facets as we can possibly conceive of. Any measurements should be explicitly metric, and all possible facets should be available. Sounds easy, but I clearly have to do much more research here to see what's feasible. In my case, I plan on using my apartment as an archaeological dig site and classifying all of the objects that I find.
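To make that concrete, here's a minimal sketch of what such a find record could look like in Python. All the field names, the identifier scheme, and the sample values are hypothetical; the point is just photos first, GPS next, explicitly metric measurements, and an open-ended bag for every other facet:

```python
from dataclasses import dataclass, field

@dataclass
class FindRecord:
    """One excavated object: photo(s) first, then everything we can
    extrapolate from them. Every field name here is hypothetical."""
    identifier: str                  # site-local ID, e.g. "APT-0001"
    photos: list                     # paths to the requisite photo(s)
    latitude: float                  # GPS coordinates of the find
    longitude: float
    measurements_mm: dict = field(default_factory=dict)  # explicitly metric
    facets: dict = field(default_factory=dict)           # any other facet

# Classifying one object from the "apartment dig":
mug = FindRecord(
    identifier="APT-0001",
    photos=["photos/apt-0001-front.jpg"],
    latitude=40.7128,
    longitude=-74.0060,
    measurements_mm={"height": 95, "rim_diameter": 80},
    facets={"material": "ceramic", "color": "blue"},
)
```

The open-ended facets dict is the hedge against the "no one true standard" problem above: the core fields stay fixed while each project records whatever extra facets it needs.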

The second issue, exposing the data, I will handle via semantic web techniques, so all of my data should be available in a structured but queryable fashion. I should then be able to pull all that data together and create a slideshow or exhibition with the proper curation. This all sounds easy to me right now, in my head, but we'll see what stumps I hit as I try to implement it; of course, in Plone :-)
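As a sketch of what "structured but queryable" could mean here: each find record can be emitted as RDF triples that a SPARQL store would load. The namespace and predicate names below are made up for illustration, and the triples are built by hand with nothing but the stdlib; a real endpoint would use a proper RDF library and vocabulary:

```python
# Hypothetical namespace for the illustration; not a real vocabulary.
NS = "http://example.org/dig#"

def find_to_ntriples(find):
    """Serialize a find (a plain dict of string facets) to N-Triples
    lines, so a SPARQL store could load and query it."""
    subject = "<%sfind/%s>" % (NS, find["identifier"])
    triples = []
    for key, value in find.items():
        if key == "identifier":
            continue  # the identifier names the subject, not a facet
        predicate = "<%s%s>" % (NS, key)
        triples.append('%s %s "%s" .' % (subject, predicate, value))
    return "\n".join(triples)

lines = find_to_ntriples({
    "identifier": "APT-0001",
    "material": "ceramic",
    "height_mm": "95",
})
```

One line per facet means a curator's query like "all ceramic finds from this site" becomes a single triple pattern instead of a dig through somebody's spreadsheet.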

[1]: http://www.nps.gov/archeology/tools/distlearn.htm


Syndicated 2010-03-03 03:15:57 from Christopher Warner » Advogato

IPv6 is a failure – stop wasting everyone's time.

Today I can't reach pypi.python.org. Why? Well, because of this:

warnerc01:~ cwarner$ telnet pypi.python.org 80
Trying 2001:888:2000:d::a3...
telnet: connect to address 2001:888:2000:d::a3: Host is down
Trying 82.94.164.163...
Connected to ximinez.python.org.
Escape character is '^]'.

What is this horseshit, you may be asking? Well, 2001:888:2000:d::a3 is an IPv6 address, and it looks like it's unavailable. After trying the IPv6 address, however, we fall back to the IPv4 address and it WORKS. Why is this a problem? Telnet is great because it falls back, but other programs aren't as capable. They don't fall back, which leads me to believe pypi.python.org is completely unavailable. It isn't, though, because we connect fine to 82.94.164.163, which is an IPv4 address. Why should we have to fall back anyway? None of this makes any sense.
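The fallback telnet performs can be sketched in a few lines of Python: resolve every address for the host, try each in order, and move on when one fails. A program that only tries the first (IPv6) result is exactly the kind that reports the host as down. A minimal sketch:

```python
import socket

def connect_with_fallback(host, port, timeout=5):
    """Resolve every address for host (IPv6 and IPv4) and try each in
    turn, returning the first socket that connects -- the behaviour
    telnet shows in the transcript above."""
    last_err = None
    for family, socktype, proto, _name, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock          # first address that answers wins
        except OSError as err:   # host down on this address: try the next
            last_err = err
            sock.close()
    raise last_err or OSError("no usable addresses for %r" % host)
```

Programs that instead call socket.connect() on a single resolved address get exactly the failure mode described here: the dead IPv6 address is tried, the working IPv4 one never is.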

Daniel J. Bernstein says it best:

It gets worse. The IPv6 designers don't have a transition plan. They've taken some helpful steps, but they typically declare success (``IPv6 support'') when the real problem---making public IPv6 addresses work just as well as public IPv4 addresses---still hasn't been solved.

You can read more in "The IPv6 mess," which describes, in a rational, coherent, and logical manner, why IPv6 is a failure and is wasting everyone's time.

[1]: http://cr.yp.to/djbdns/ipv6mess.html


Syndicated 2010-02-25 17:38:19 from Christopher Warner » Advogato
