Recent blog entries for Chicago

SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


Syndicated 2014-07-10 13:18:17 from jejt / jmons

Cloud Computing Deployments … Revisited.

So it's been a few years since I've posted, because it's been so much hard work, and we've been pushing really hard on some projects which I just can't talk about – annoyingly. Anyways, on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud, and discussed two main methods – having what we now call 'Gold Standards' vs continually updating.

The interesting thing is that as we've grown as a company, and as we've become more 'Enterprise', we've brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for banking or payment based modules than we do for marketing modules. We're able to segregate operational data and content from personally identifiable information – PII has much higher regulation on who can access it (and auditing of who does).

Several other key things had to change: for instance, things like the SSL keys of the servers shouldn't be kept in the development repo. Now, of course not, I hear you yell, but it's a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh – if you keep *that* in, then you would keep your SSL certs in…

So the answer becomes having lots of repos. One repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.
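One common way to square the "config in the repo, secrets out of it" problem is to have the Django settings file live in the application repo but read anything sensitive from the environment, so the deployment repo or provisioning tooling supplies the secrets at deploy time. This is a hypothetical sketch with made-up names, not necessarily what we actually do:

# settings.py – hypothetical sketch: the file itself lives in the application
# repo, but secrets are supplied by the deployment side at deploy time.
import os

DEBUG = False
ALLOWED_HOSTS = ["example.com"]  # placeholder domain

# Injected by the deployment tooling (environment variables, or a file
# rendered out of the separate deployment/config repo).
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": os.environ.get("DB_NAME", "app"),
        "USER": os.environ.get("DB_USER", "app"),
        "PASSWORD": os.environ["DB_PASSWORD"],  # never committed
        "HOST": os.environ.get("DB_HOST", "localhost"),
    }
}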

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot; and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It's a step away from the continual deployment strategy, but it is mostly automated.
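As a very rough illustration of the first tool – a hedged sketch using boto3, not the tooling we actually use, with every ID and name a placeholder:

# bake_ami.py – hypothetical sketch of the "bring an AMI up, configure,
# snapshot" step.  Region, base image id and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Launch an instance from the base image (the actual install/configure step
# is assumed to happen elsewhere, e.g. via user-data or a provisioning tool).
response = ec2.run_instances(
    ImageId="ami-00000000",          # placeholder base image
    InstanceType="m3.medium",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ... once provisioning has finished, snapshot it into a new gold image ...
image = ec2.create_image(InstanceId=instance_id, Name="banking-module-gold")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# The second tool would take this AMI (and the others needed) and build the
# VPC, e.g. ec2.create_vpc(CidrBlock="10.0.0.0/16"), create_subnet(...), etc.
print("New gold image:", image["ImageId"])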


Syndicated 2014-07-10 13:15:53 from jejt / jmons

21 Jul 2012 (updated 30 Apr 2013 at 12:57 UTC)

Now Your All Dreams Will Going To Become Reality with Your Own Home Business

Post removed as it was spam that came through the aggregator.

Syndicated 2012-07-21 02:55:23 from jejt / jmons

Continual Integration Development and the Cloud

One of the big buzz phrases at the moment seems to be Continual Integration Development. If you're developing and wanting to deploy 'as the features are ready', and you have a cloud, you have two main options, both of which have pros and cons:

New Image per Milestone

Most cloud systems work by you taking an 'image' of a pre-setup machine, then booting new instances of these images. Each time you get to a milestone, you set up a new image, and then set up your auto scaling system to launch instances of it rather than the old one; you also have to shut down all the instances running the old image and bring them back up from the new one.

Pros: The machines come up in the new state quickly.
Cons: For each deployment, you have to do quite a bit more work making the new image. Each deployment requires shutting down all the old instances and bringing up new replacements (see the sketch below).
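For example, on AWS the image swap could look roughly like this – a hypothetical boto3 sketch with made-up names, not a recipe for any particular provider:

# new_milestone.py – hypothetical sketch: register a launch configuration for
# the new milestone image and point the auto scaling group at it.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-v42",     # placeholder name for the milestone
    ImageId="ami-11111111",                # the freshly built milestone image
    InstanceType="m3.medium",
)

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",        # placeholder group name
    LaunchConfigurationName="web-v42",
)

# Instances already running still use the previous image until they are
# terminated and replaced – which is exactly the extra work described above.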

SCM Pull on Boot

Make one image, and give it access to your SCM (e.g. git / svn etc). Build in a boot process that brings up the service but also fetches the most recent copy of the 'live' branch.

Pros: You save a lot of time in deployments – deployments are triggered by people committing to the live branch, rather than by system administrators performing the deployments. Because the machines are running the SCM client, updating all the currently running images is as simple as just running the fetch procedure again.
Cons: You do need to maintain two branches, a live and a dev branch, and merge between them (some SCMs might not like this). Also, your SCM hosting has to be able to cope with the load when lots of new machines come up and fetch at once. Your machines come up a little slower as they have to do the fetch before they are usable.

I opted for the second route: we use Git, so we can clone quickly to the right branch. We've also added git hooks that make sure any setup procedures (copying the right settings file in) are done when the computer comes up. Combining this with a fabric script to update all the currently running boxes is a dream.
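The fabric script itself isn't shown in the post, but a minimal sketch of the "update all the running boxes" step (Fabric 1.x style, with made-up hosts and paths) might look something like this:

# fabfile.py – hypothetical sketch: fetch the newest 'live' branch on every
# running instance.  Host names, paths and the service name are placeholders.
from fabric.api import cd, env, run, sudo, task

env.hosts = ["web1.example.com", "web2.example.com"]

@task
def deploy():
    """Update every running box to the tip of the 'live' branch."""
    with cd("/srv/app"):
        run("git fetch origin")
        run("git reset --hard origin/live")
        # any post-checkout setup (settings file, etc.) would go here,
        # mirroring what the git hooks do when a new instance boots
        sudo("service app restart")

You would then run it with fab deploy against whatever boxes are currently up.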


Syndicated 2011-03-20 11:10:17 from jejt / jmons

What Cloud Computing System to Use?

So you’re sitting at work, and you have to build a new system, and for once you don’t have any previous code or language forcing you to write one way or another, and you know this is going to get big – maybe not Twitter or Google big, but certainly big enough to give you a good old headache. The big question becomes what technology to use. Firstly, I apologise that this is early 2011, so if you’re reading this in two or three years (or even six months) then the technology will all change again – I’m not planning on updating this particular post as the stuff changes, but I might make new ones.

So the first decision to make is what cloud computing system you're going to use – are you doing lots of queries, or just a few queries and lots of processing? I'm presuming the first one, but the latter is quite interesting – it deals with universities and researchers trying to run over massive data sets and produce reports.

Your main contenders are:

  • Some collection of *nix or Windows servers
  • Proprietary cloud compute services

The first category might mean more work for you and your sysadmins – it really does point towards needing a sysadmin, but it gives you a lot more flexibility in your choice of languages and systems, whereas the latter might mean that you are able to do without those, and also (depending on the service) have access to a lot of tools and power without having to use any other third party services.

The main proprietary services at the moment seem to be:

  • Google’s Apps
  • Microsoft Azure

Now – both of these platforms are quite seductive, and they have a lot of benefits – mainly that you don't have to be a sysadmin to deploy and maintain the system, and that you can access quite complicated things such as shared and persistent storage, caching and database pooling without having to actually spend days reading various manuals for everything.

The downside though? You are locked to one provider and their billing methods – Apps has a very strange billing mechanism to do with the number of users (which, if you're producing something for a lot of users, might be very expensive). And because you're locked into that service provider, there isn't another provider you can go to for alternative pricing. Because of this, I think a lot of smaller businesses make a commercial decision to go with the more traditional style of hosting.

Traditional clusters (such as those provided by Amazon and Rackspace) seem to provide a collection of tools alongside: content distribution networks for static content such as images and JavaScript, as well as tools for monitoring and automatically scaling the systems. The advantage of the traditional route is that it's easy to run up a local copy at your location and develop away, which means that when you are looking at doing architectural changes, these are much easier to stage to live.

In my opinion, cloud hosting of 'traditional Linux/Windows' boxes has massive commercial advantages, but it does require more systems-administration work.


Syndicated 2011-03-20 10:05:01 from jejt / jmons

Hissing Noise from Speakers Fixed


So recently a friend at work (the esteemed DJ Hedflux) had a problem with his speakers – he has a pair of powered speakers, and he outputs sound down a normal stereo cable into the speakers. Being a professional DJ, nothing in his system is particularly cheap (he's not spending £100 on cables, but he's not buying £1.50 speaker modules from the market), so we can quickly rule out shoddy work and bad connections inside the devices.

Anyways, he worked out that it was a combination of his computer and his touch lamp causing it – doing things on his computer that required lots of processing, combined with having the dimmer lamp "dimmed", caused the noise to appear and disappear.

This is very common, and also, very easy to fix.

How to fix:

[Image: ferrite ring filter]

The solution is very simple, and very cheap. You'll need a ferrite ring, around which you wind the audio cable just a couple of times, like in the above picture.

Maplins sell these for a couple of pounds; you'll usually find them in the Radio section:

http://www.maplin.co.uk/Module.aspx?ModuleNo=29788

Image stolen from http://www.gbrcaa.org/ntoa/Filters,%20Chokes%20and%20OIs.htm on which site you can also find more information about other kinds of chokes.

The science:

In Hedflux's case, it's possible that his computer internals aren't all grounded to the case correctly, or the case to the power supply, so it's generating more electrical noise than ideal. The dimmer lamp, however, is a common source of Electromagnetic Interference (EMI).

Because he's using a set of powered speakers, the loudspeaker cable is acting as an antenna, and the amplification circuit has the side effect of acting as a little radio, which is picking up the EMI from the computer and the lamp and turning it into the annoying hissing noise.

The ferrite bands that he added to the audio cable basically change the frequency response of that cable, effectively filtering the annoying frequencies out.

Why don't these cables come with the bands installed? Well, the point is that it doesn't STOP the frequencies, it shifts the resonance frequency of the cable. The cable and speakers will still produce noise if the EMI comes in on a different frequency, so it was just bad luck that the EMI in his room was at the same frequency as the cable / speaker setup.

Syndicated 2009-09-10 19:25:39 from Holding the Soldering Iron by the Cold End

Mud/Mush/Moo


So I've started on a TinyMUSH server, and it's quite interesting scripting objects together, so here is my first object (which is actually version two of it, because the first version was a little cumbersome). Please note this is ready for copying and pasting into the mush (I've escaped the ; with backslashes).

@create Board
@desc Board=An oak framed chalk board ready to be written on. It contains the words of wisdom from the teachers, or doodles from the students.[ifelse([hasattr(me,text)],It currently reads: [eval(me,text)],It is currently blank)]. Feel free to "write on board z" or "erase board"
@lock Board==me
@set BOARD = COMMANDS
&C_WRITE Board=$write on board *:&text me=%0\;pose has just been written on
&C_ERASE Board=$erase board:&text me\;pose causes clouds of dust to rise as it is erased

It has two commands, "write on board x" and "erase board", which set and clear an attribute, &text. I suspect I should change the pose lines to emit lines. But apart from that, I think it's pretty cool.

Syndicated 2009-06-25 08:47:47 from Holding the Soldering Iron by the Cold End

Platform independent Distributed Stuff


Ok, so there's this massive need I've found for good distributed bug tracking which is actually platform independent. And this brings me around to my gripe of the week: Platform Independence.

Firstly, I classify myself as an indie developer - that means that I use open source and free software tools in my tool chains, and I use developer / free-to-use SDKs to develop applications.

So I'm not able to afford any of the larger programs and tools which are used - things like Visual Studio or Team System. Things like bug management and tracking, things like support and ticketing. To some extent, things like release management.

And why would I want to? There are plenty of open source, free, and community based tools out there which do all of these things. The problem, however, is that they often don't work across all platforms. They often have requirements or don't "fit together properly".

Last night I found a quote by Doug Vargas (I don't know who the guy is or I would have linked to it):

“It’s easy to cry ‘bug’ when the truth is that you’ve got a complex system and sometimes it takes a while to get all the components to co-exist peacefully”

So with this in mind, this is where things begin to get a bit more tricky. The first problem is that I don't work on one project - I don't have a single project which is "my baby", I don't have a single passion in life which takes my complete and utter concentration; instead I have a few - simple single-use tools which I reuse in other projects. I have a collection of libraries and tasks, and I treat each one as a new project and keep them separate.

So when it comes to using some of these tools - they are really only for one project. For instance, trac - awesome, huh? But I don't want to have to configure one for every application that I write. And this is where some of the harder-hitting applications come in, but there really isn't much for the small indie people.

Then we find that half these tools only work on Linux. Or Windows. Or they work in Cygwin and they require some bizarre libraries. It's very easy to write a tool which does what *you* want and then not worry about everyone else - in fact this is probably the only way that the tool gets written.

So I'm starting work on a platform-independent distributed bug tracking app (this is because I want to track bugs inside of the projects) and I've decided to write it in C#, which means it will be dependent upon Mono or .NET being installed on the target platform.

I've worked out that I need a scriptable level in between my application and the SCM. Because I want to make it independent of the SCM (at the moment I am tying it into Git), I can see that I need a script wrapper around the SCM (which might be Git or might be SVN or something like that) which would provide the answers to a collection of questions that I want to put to it.
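To illustrate the idea only – this is Python rather than the C# the project is actually written in, and every name here is made up rather than taken from the project – the wrapper might answer the tracker's questions like this:

# scm_wrapper.py – hypothetical sketch of the "script wrapper around the SCM"
# idea: the bug tracker only ever asks these questions, and a Git- or
# SVN-specific implementation answers them.
import subprocess

class GitWrapper:
    """Answers the tracker's questions by shelling out to git."""

    def __init__(self, repo_path):
        self.repo_path = repo_path

    def _git(self, *args):
        return subprocess.check_output(
            ("git",) + args, cwd=self.repo_path, text=True
        ).strip()

    def current_revision(self):
        return self._git("rev-parse", "HEAD")

    def current_branch(self):
        return self._git("rev-parse", "--abbrev-ref", "HEAD")

    def is_dirty(self):
        return bool(self._git("status", "--porcelain"))

# An SvnWrapper (or a wrapper for any other SCM) would expose the same
# methods, so the bug tracker never needs to know which SCM is underneath.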

It's getting things to coexist peacefully which I think is the big programming problem.

Syndicated 2009-04-07 11:32:08 from Holding the Soldering Iron by the Cold End

SDKs and what that means


I'm having a real problem understanding the idea of an SDK. The entire concept is becoming more and more spleurgh (if you don't mind my making up words). The concept makes sense when it ties in with a particular IDE. And here's the problem - why should anything tie into an IDE? The advantage of using something like Ant is that it allows you to build both inside an IDE and outside.

I built a "build bot" a while ago which took its own parameters and called csc.exe from the command line to help with building - because my knowledge was small, it seems that I had missed the point of msbuild. This takes the same project files that Visual Studio produces and does the same builds.

The only problem I have now is that I don't know if there is a Mono alternative to msbuild. If there is, that would be awesome, because I can define the build just as I would a Java Ant build, and know that it would work no matter what kind of system you were going to build for.

So is an SDK actually more than libraries? When it comes to mobile development, they seem to be libraries, maybe some tools which integrate into a particular IDE (for example Eclipse), and maybe also an emulator or test tools of some kind. Perhaps an SDK is also partly a marketing thing - that you expect a higher level of support with an SDK.

Why am I writing this? Well, I'm currently struggling with the Windows SDK for .NET 3.5 - I'm having massive problems with it and its compatibility with Visual Studio C# Express. It would be nicer if, instead of complex SDK installers (and from my experience, it is the installer which is the problem), they had library versions or just "unzippable" folders to get the tools out.

Anyways: the Windows SDK and Visual Studio do not like to be installed in the wrong order - Visual Studio first, THEN the Windows SDK (or at least, that's the case if you're using Express editions because you're an indie developer like me).

Syndicated 2009-04-01 10:01:29 from Holding the Soldering Iron by the Cold End

EAGLE CAD and Valves


Uhh… ok. The biggest problem I have with EAGLE CAD (and I only have the freeware version here - my job lets me use design tools at work, but most of the stuff I talk about on here is actually for home, and I'm trying to finally get some of it online as part of my "open source" roots) is…

the lack of parts. Too often I come across a custom part that I want to incorporate in my circuit. When I draw my circuits by hand, that's not a problem, but to get these things online I need to digitise them, and scans of my lab book just really aren't suitable.

This time, I found the need for a particular triode - the 6SL7. Now, before people start asking me questions about triodes and valves in general: I don't know, this is the first time I've ever used them and I'm getting quite excited over the prospect. But EAGLE CAD just doesn't contain the parts for some of these devices, so more and more I've been building my own versions.

Because EAGLE CAD is designed to go from a schematic straight to a board layout, I've found that I've been keeping the schematics as close to the originals as possible, but I really have just been throwing pins at the package file, meaning that if anyone were to convert my schematics to boards then they would certainly a) be wrong and b) could be quite amusing.

Anyways, this:

[Image: 6sl7 dual triode]

is the current working diagram. Yes, I stole it. The book in question, "22 Radio and Receiver Projects for the Evil Genius", was actually a Christmas present, and I'm getting quite annoyed with the number of typing mistakes I'm finding in it. I wouldn't say it's the best book - you have to be a genius (or be good at pretending to be one) to understand some of the bits in it, but forging onwards … high voltage valve radio soon.

      

Syndicated 2009-01-02 14:22:29 from Holding the Soldering Iron by the Cold End

