Older blog entries for benad (starting at number 110)

A Tale of Two Shells

Last year, I completed "properly" learning the bash ("Bash"?) shell, using a combination of the book "Learning the bash Shell" and reading from start to finish the gigantic "man" page. And that was enough to convince me that, regardless of its ubiquity, I don't like it much, be it as a scripting language or as a command-line shell.

Having already learned tcsh, because it was the default shell on Mac OS X and is still popular, I was ready to try out more modern shells, rather than ones stuck in the 80s.

First, on Windows, the DOS-style Command Prompt is quite archaic and annoying. Simple things like sleeping for a second require unintuitive workarounds, and DOS batch scripting is painfully difficult, so I was eager to find something better.
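To illustrate what I mean by unintuitive: batch has no built-in sleep command, so the classic workaround (one of several) abuses the roughly one-second delay between ping packets.

    REM Wait about one second: ping localhost twice, discard the output
    ping -n 2 127.0.0.1 > nul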

Since Windows 7, this strange "PowerShell" comes installed by default, and it was heralded as a revolutionary step in command-line shells. Is it?

After reading a few tutorials, including this "free ebook" on powershell.com, things became clear. Windows PowerShell is essentially a shell built atop .NET that manipulates streams of objects rather than plain text across pipes, though thankfully it formats the objects as plain text on the console screen by default. It provides many "cmdlets" that can manipulate that stream more conveniently than grep and awk. For example, listing all files in a directory greater than a megabyte is trivial in PowerShell, while in a UNIX shell it requires an awkward (pun intended) combination of character-position extraction and arithmetic. The "PowerShell ISE", also provided with PowerShell, can even perform tab-completion on the fields of the objects coming out of the current pipe, making it easy to extract their attributes. Because so much of the Windows internals is accessible through COM and .NET, it is easy to perform system administration with it, for example installing system services and querying their status. The only major issue I've found with PowerShell is that executing PowerShell scripts is completely disabled by default until unlocked by an administrator. Also, as a minor gripe, the version of PowerShell that comes out of the box with Windows 7 is quite outdated. Overall, if there ever was a "grand vision" of ".NET" in Windows, it is best represented by PowerShell.
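As a concrete, hedged taste of that comparison (the awk column assumes the usual "ls -l" layout where field 5 is the file size):

    # PowerShell: list files larger than one megabyte
    Get-ChildItem -Recurse | Where-Object { $_.Length -gt 1MB } | Select-Object FullName, Length

    # A rough UNIX equivalent, extracting the size column from "ls -l" with awk
    ls -l | awk '$5 > 1048576 { print $NF }'

    # And the administrator unlock needed before any PowerShell script will run at all
    Set-ExecutionPolicy RemoteSigned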

Back on UNIX-like systems, it seems like users are quite happy with old shells, or at least with incremental evolutions of the old "Bourne shell" and Berkeley's "C Shell". Looking around, I found "fish", the "Friendly Interactive SHell". I liked its ironic tagline, "Finally, a command line shell for the 90s", since it was initially released in 2005. It is, indeed, friendly: it has a deliberately limited set of features, and its out-of-the-box defaults make interactive use enjoyable. It was built around a comprehensive design document that explicitly favours usability over compatibility with older or popular shells. The results are spectacular: everything has colour; TAB and arrow-key completion with a type-ahead preview in light grey; inline argument completion for most commands (including "man") that interactively presents all the options and their meanings, automatically extracted from the "man" pages; configuration editing through a built-in web service; configuration changes applied to all running shells instantly; I could go on. Its scripting is quite limited, but that may be a good thing considering the Shellshock bug (not that "fish" has no security holes, but at least they're not there "as designed").
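A few of those niceties, as seen from a stock fish install (the "ll" function is just an example name):

    # Open the web-based configuration UI in your browser
    fish_config

    # Universal variables propagate instantly to every running fish session
    set -U fish_greeting "hi there"

    # Functions are trivial to define, and funcsave persists them for future sessions
    function ll
        ls -lh $argv
    end
    funcsave ll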

Personally, I am ready to move to both PowerShell and "fish" for day-to-day use. While neither has much in common with older shells, they are far more usable. I highly recommend them to all command-line users.

Syndicated 2014-10-28 01:37:41 from Benad's Blog

Moniker, the Security Weak Point

By the time I heard about the "Shellshock bug" security hole on the morning of September 25, the small Debian Linux server that I set up at Ramnode to host my web site had already patched that security hole by itself. At any rate, my web site hosts only static pages, so it was never impacted.

While I am in control of the security of my web site, from the Linux kernel through the web server up to the web pages it serves, I'm still dependent on its hosting service (Ramnode) being secure. Another potential danger would be for someone to hijack the "benad.me" domain name and make it point to a version of the site filled with malware and viruses. Sadly, this almost happened.

Back in 2008 I registered my domain through the registrar Moniker, which used to be recommended partly for its security. They offered additional paid features to "lock" the domain and prevent unauthorized transfers by someone who had stolen your user name and password. Since then, though, the company was bought by another company, and what is now called Moniker is Moniker in name only, both in terms of staffing and software.

I did notice a difference in tone in email communications from the new Moniker. They seemed to be highly focused on domain name auctions, and would automatically auction off expired domains. This felt like a conflict of interest, as Moniker would derive higher profit from auctioning off your domain than from helping you renew it. Of course, they would never do that to valuable customers that do "domain speculation" and own a large number of (unused) domains, but it still raises the suspicion that the company was sold based on the number of domains it had and on how much money could be extracted from large speculators, rather than on providing valuable customer service.

The "new" Moniker had a security hole in 2013, and to fix that Moniker forced users to change their password the next time they logged in. Note though that this happened with the old version of that web site. This summer, the parent company that bought Moniker (and its name) scrapped the old site's code and replaced it with a new broken, buggy interface. The new interface also brought with it worse security, and made the domain locking feature completely ineffective.

In early October, Moniker sent an email to all its users saying that, for unspecified security reasons, all account passwords would be reset. The shock was that the email contained both the user names and passwords of all of the user's accounts. My old Moniker account, identified by a standard-looking user name, had been placed under a parent, numerically-identified user name I had never seen, alongside another numerical sub-account created without my knowledge. It should be noted that I could never access that numerical sub-account, even with the password provided in the email. Also, the email said that new passwords must meet security requirements, including at least one "special character", even though the passwords provided in the email didn't contain any, and the password change form itself refused most special characters.

OK, I'm not a security expert, but sending user names and passwords in an email, refusing special characters (which would indicate that they don't use bcrypt), and resetting the passwords of all users may indicate that they were hacked. Badly. Moniker cited the Shellshock bug, but as reports of stolen domains started to appear, a user came forth saying that the security hole predated Shellshock by a month.

So, I was convinced that Moniker had a pattern of not taking security seriously, at least until they experience a mass exodus of their customers. I started the domain transfer process the day after they announced the password reset, and I would recommend everybody else do the same. I transferred to Namecheap. Despite its name, in my case the price was the same. As a test, I created a new, empty account before the transfer, and I can already attest that they take security seriously, including emails for account activity (using secondary email addresses in rotation) and 2-factor authentication (using SMS for now). I completed the transfer yesterday, which would explain the bit of downtime when resolving my domain name.

Syndicated 2014-10-15 01:21:21 from Benad's Blog

Eventual Consistency, Squared

Looking back at my article "The Syncing Problem", implementing a generic DVCS seems like a relatively straightforward solution. Actually, if the "data to sync" were simplified to plain text, existing DVCS like git or Mercurial might be sufficient. But there is a fundamental problem I glossed over that has huge ramifications for the design of the DVCS, ramifications that make existing DVCS implementations dangerous to use.

In modern "Internet-connected" appliances, there are two storage solutions: On-device, and "in the cloud". It is the "cloud" storage that is going to be used for the devices to communicate to each other indirectly when performing data synchronization. There is though a huge behavioural difference between on-device and cloud storage: The cloud storage is "eventually consistent". Beneath its API, the cloud storage itself may also be distributed across machines, and modified data can take a little while to propagate to other machines. Essentially, if you upload a file from one device, it may take a little while for another device to see the change.

Sadly, whatever conflict resolution a cloud storage provider offers is unreliable, because its behaviour is either undocumented or inconsistent. Locking files on such storage may not be possible either. Worse, synchronization issues internal to the cloud provider can make its propagation speed so inconsistent that it becomes unreliable as a means of quickly communicating information between devices. Almost all VCS (distributed or not) assume a reliable storage area for the version repository. Hosted VCS guarantee ACID properties. No DVCS was designed to push revision information to unreliable storage and use that as the primary means of exchanging information.

The easiest solution for this is to design a DVCS that supports "write-only" repositories. If each storage key (file name) contains the checksum of the data it holds, multiple clients can safely write changes to the same shared repository area. Even if listing the available "data blocks" on the shared storage is inconsistent, all a listing can do is add to the local repository's "knowledge" of what exists in the shared one. The atomicity of the storage blocks should be as close as possible to the atomicity of a transactional version-control delta, especially since the cloud storage may make information appear on other devices out of the order in which it was written. That could make those "patch files" larger than in a well-optimized VCS, but on cloud storage we may not have any other option.
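As a minimal sketch of that idea (the aws CLI, the bucket name and the patch file are assumptions for illustration only): naming each uploaded block after the checksum of its own contents makes concurrent writes collision-free, and an inconsistent listing can at worst under-report what exists, never contradict it.

    # Upload a patch block under a name derived from its own contents
    key=$(shasum -a 256 patch.bin | cut -d' ' -f1)
    aws s3 cp patch.bin "s3://sync-repo/blocks/$key"

    # Merge whatever the (possibly stale) listing reports into local knowledge
    aws s3 ls "s3://sync-repo/blocks/" | awk '{ print $4 }' >> known-blocks.txt
    sort -u known-blocks.txt -o known-blocks.txt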

Sure, a write-only repository may be a big issue if the files under version control are too large or if storage is limited, but then most VCS tend to avoid deleting historical data anyway, and when they do support "cleaning up a repository", the solutions are clumsy and error-prone. In our case, if a device prematurely deletes older historical data from the shared storage, unaware that other devices were last synced at older versions and may branch from there, then this would be tantamount to using the shared storage to host only the latest version and nothing else. All this to say, deleting historical data in shared, eventually-consistent storage is a difficult problem that may involve a lot of tuning based on how long devices can stay unsynchronized before being considered "lost", compared to how quickly the cloud storage is expected to become consistent.

Syndicated 2014-10-04 15:05:19 from Benad's Blog

Client-Side JavaScript Modules

I dislike the JavaScript programming language. I despise Node.js for server-side programming, and people with far more experience than me with both also agree, for example Ted Dziuba in 2011 and more recently Eric Jiang. And while I can easily avoid using Node as a server-side solution, the same cannot be said about avoiding JavaScript altogether.

I recently discovered Atom, a text editor based on a custom build of Chromium, essentially running on HTML, CSS and JavaScript. Though it is far from the fastest text editor out there, it feels like a spiritual successor to my favourite editor, jEdit, but based on modern web technologies rather than Java. The net effect is that Atom seems to be the fastest-growing text editor, and with its deep integration with Git (it was made by GitHub), it makes it a breeze to change code.

I noticed a few interesting things that were used to make JavaScript more tolerable in Atom. First, it supports CoffeeScript. Second, it uses Node-like modules.

CoffeeScript was a huge discovery for me. It is essentially a programming language that compiles into JavaScript and makes JavaScript development more bearable. The difference between its syntax and JavaScript's reminds me a bit of the difference between Groovy and Java. There's also a very interesting JavaScript-to-CoffeeScript converter called js2coffee. I used js2coffee on one of my JavaScript modules, and the result was far more readable and manageable.

The problem with CoffeeScript is that you need to integrate its compilation to JavaScript somewhere. It just so happens that its compiler is a command-line JavaScript tool made for Node. The JavaScript equivalent of Makefiles (actually, more like Maven) is called Grunt, and from it you can call the CoffeeScript compiler directly, along with UglifyJS to make the generated output smaller. All of these tools end up under node_modules/.bin when installed locally using npm, the Node package manager.
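A plausible sketch of such a setup (the exact package list, and the Gruntfile that wires the CoffeeScript and UglifyJS tasks together, are assumptions):

    # Install the build tools locally, next to the project
    npm install --save-dev coffee-script grunt grunt-cli grunt-contrib-coffee grunt-contrib-uglify

    # Run the locally-installed tools from node_modules/.bin
    ./node_modules/.bin/coffee -c -o lib/ src/   # compile CoffeeScript directly...
    ./node_modules/.bin/grunt                    # ...or let the Gruntfile drive it, plus UglifyJS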

Also, by writing my module as a Node module (actually, CommonJS), I could use some dependency management and still deploy it to a web browser environment using Browserify. I could even go further and integrate it with Jasmine for unit tests, and run them in a GUI-less full-stack browser like PhantomJS, but that's going too far for now, and you're better off reading the Browserify article by Bastian Krol for more information.
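The Browserify step itself is a one-liner (the entry and output file names here are made up):

    # Bundle a CommonJS-style module and its dependencies into a single browser file
    npm install --save-dev browserify
    mkdir -p dist
    ./node_modules/.bin/browserify src/main.js -o dist/bundle.js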

It remains that Browserify is kind of a hack and isn't ideal for running JavaScript modules in a browser, as it has to bundle browser equivalents of functionality unique to Node and isn't optimized for high-latency asynchronous loading. A better solution for browser-side JavaScript modules is RequireJS, using the AMD module format. While not all Node modules have an AMD equivalent, the major ones are easily accessible with bower. Interestingly, you can create a module that runs as AMD, in Node and natively in a browser using the templates called UMD (as in "Universal Module Definition"). Also, RequireJS can load Node modules (that don't use Node-specific functionality) and any other JavaScript library made for browsers, so that you gain asynchronous loading.
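For the RequireJS route, the pieces look roughly like this (the data-main path is hypothetical):

    # Fetch RequireJS (and other AMD-friendly libraries) with bower, itself installed locally
    npm install --save-dev bower
    ./node_modules/.bin/bower install requirejs

    # A page then loads everything asynchronously through a single entry point:
    # <script data-main="js/app" src="bower_components/requirejs/require.js"></script>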

It should be noted that bower, grunt and many other command-line JavaScript tools are made for Node and installed locally using npm. So even if "Node as a JavaScript web server" fails (and it should), using Node as an environment for local JavaScript command-line tools works quite well and could have a great future.

After all is said and done, I now have something that is kind of like Maven but for JavaScript, using Grunt, RequireJS, bower and Jasmine to download, compile (CoffeeScript), inject and optimize JavaScript modules for deployment. Or you can use something like CodeKit if you prefer a nice GUI. Either way, JavaScript development, whether for client-side software like Atom, for command-line scripts or for the browser, is finally starting to feel reasonable.

Syndicated 2014-08-15 03:14:41 from Benad's Blog

The Case for Complexity

Like clockwork, there is a point in a programmer's career where one realizes that most programming tools suck, and that not only do they hinder the programmer's productivity, but worse, they may hurt the quality of the product for end users. And so there are cries about the absurdity of it all: some posit that complex software development tools must exist because some programmers value complexity above productivity, while others long for the days when programming was easier.

I find these reactions amusing. Kind of a mid-life crisis for programmers. Trying to rationalize their careers, most just end up resigning themselves to a professional life of mediocrity, using dumber tools and hoping to avoid the main reason why programming can be challenging. I went through that "programmer's existential crisis" in my third year as a programmer, just before deciding to make it a career, but I came out of it with what seems to be a conclusion seldom shared by my fellow programmers. To some extent this is why I don't really consider myself a programmer but rather a software designer.

The fundamental issue isn't the fact that software is (seemingly) unnecessarily complex, but rather understanding the source of that complexity. Too many programmers assume that programming is based on applied mathematics. Well, it ought to be, but programming as practiced in the industry is quite far from its computer science roots. That deviation isn't due only to programming mistakes, but also to more irrational external constraints and requirements. Even existing bugs become part of the external constraints if they are in things you cannot fix but must "work around".

Those absurdities can come from two directions: top-down, based on human needs and mental models, or bottom-up, based on faulty mathematical or software design models. Productive and efficient software development tools, by themselves, add complexity on top of the programming language. Absurd business requirements, including cost-saving measures and dealing with buggy legacy systems, not only bring complexity, but the workarounds they require bring even more absurd code.

Now, you may argue that abstractions make things simpler, and to some extent, they do. But abstractions only tend to mask complexity, and when things break or don't work as expected, that complexity resurfaces. From the point of view of a typical user, if something is broken, you ask somebody else to fix it or replace it. But being a programmer means being that "somebody else" who takes responsibility for understanding, to some extent, that complexity.

You could argue that software should always put usability first. And yet, usable software can be far more difficult to implement than software that is more "native" to its computing environment. All those manual pages, flexible command-line parameters, adaptive GUIs, pseudo-AIs, Clippy, and so on, bring enormous challenges to the implementation of any software, because humans don't think like machines, and vice versa. As long as users are involved, software cannot be fully "intuitive" for both users and computers at the same time. Computers are not "computing machines", but rather sophisticated state machines made to run useful software for users. Gone are the days when room-sized computers just did "math stuff" for banks, and user interaction was limited to numbers and programmers. The moment there were personal computers, people didn't write "math-based software", but rather text-based games with code of dubious quality.

The complexity of software will always increase, because it can. Higher-level programming languages become more and more removed from the hardware execution model. Users keep asking for more features that don't necessarily "fit well", so either you add more buttons to that toolbar, or you create a brand new piece of software with its own interfaces. Even if for some reason computers stopped getting faster over time, it wouldn't stop users from asking for "more", and programmers from asking for "productivity".

My realization was that there has to be a balance between ever-increasing complexity and our ability to understand it. Sure, fifty years ago it would have been reasonable for a single person to spend a few years fully understanding a complete computer system, but nowadays we have to specialize. Still, specialization is possible because we can understand a higher-level conceptual design of the other components rather than just an inconsistent mash-up of absurdity. Design is the solution. Yes, things in software will always get bigger, but we can make it more reasonable to attempt to understand it all if, from afar, it was designed soundly rather than having just accidentally "become". With design, complexity becomes a bit smaller and more manageable, and even though only the programmers will have to deal with most of that complexity, good design produces qualities that are visible all the way up to the end users. Good design makes tighter "vertical integration" easier, since making sense of the whole system is easier.

Ultimately, making a better software product for the end users requires the programmer to take responsibility for the complexity of not only the software's code, but also of its environment. That means using sound design for any new code introduced, and accepting the potential absurdity of the rest. If you can't do that, then you'll never be more than a "code monkey".

Notes

  1. Many programmers tend to assume that their code is logically sound, and that their errors are mostly due to menial mistakes. In my experience, it's the other way around: The buggiest code is produced when code isn't logically sound, and this is what happens most of the time, especially in scripting languages that have weak or implicit typing.
  2. I use the term "complexity" to mean the total number of module connections rather than the average module coupling. I find "complexity as a sum" more intuitive from the point of view of somebody who has to be aware of the complete system: adding an abstraction layer still adds a new integration point between the old and new code, adding more things that could break. This is why I normally consider programming tools to be added complexity, even though their code completion and generation can make programmers more productive.

Syndicated 2014-07-31 02:14:53 from Benad's Blog

Running Final Fantasy VII (Steam) on Mac

The recent re-release of Final Fantasy VII on PC and Steam at last made the game easily accessible on modern computers. No need for MIDI drivers or the TrueMotion codec, since they were replaced with Ogg Vorbis music and On2 VP8 video files. As an added bonus, the game can sync the save files "to the cloud", and has a way of "boosting" the character stats in those save files to make the game easier if necessary.

But then, the game is made only for Windows, and I only have a Mac. Sure, I can use Bootcamp or a virtual machine, but I'd rather play it on the Mac itself than install and maintain a Windows machine. And not everybody has spare Windows licenses anyway.

So I attempted to use Wine, the "not a Windows emulator". I don't know why each time I use Wine, I'm still pleasantly surprised at how well it works. Sure, the very early versions of Wine were quite unstable, but that was a decade ago. Nowadays, at least 90% of Windows software can run quite well in Wine, without too much effort or hacking, and it just keeps getting better each time I try it.

Now, the latest version of Wine can run Steam quite well. Inside that Steam installation, installing and running Final Fantasy VII worked flawlessly.

Setting up Wine and Steam on Macs is quite easy:

  1. Install Xcode. Go in Xcode, Preferences, Downloads and install the command-line tools.
  2. Install MacPorts.
  3. In the Terminal, run sudo port install wine-devel winetricks.
  4. Run winecfg, and close the window.
  5. Run winetricks steam.
  6. Run env WINEPREFIX="$HOME/.local/share/wineprefixes/steam" wine C:\\windows\\command\\start.exe /Unix ~/.local/share/wineprefixes/steam/dosdevices/c:/users/Public/Start\ Menu/Programs/Steam/Steam.lnk

And that's it. When installing the game I checked the option to install a shortcut in the Start menu, and I've made a small tool called run_desktop.pl to launch the game directly from the Mac. For example, I would start it with perl run_desktop.pl ~/.local/share/applications/wine/Programs/Steam/FINAL\ FANTASY\ VII.desktop.

Syndicated 2014-07-12 17:50:24 from Benad's Blog

Resuming Final Fantasy with VII

I've never played Final Fantasy VII. Why? Simply put, I've never owned a PlayStation. Considering that countless players say it is the best Final Fantasy game, some going as far as the dubious claim that it's the best video game ever made, I have to try it. Since both FF VII and FF VIII were released on Steam (though Windows-only), they are now finally easily (and legally) accessible to me without having to own a PlayStation console.

I'm skipping too much context. FF VII was a highly influential game. Not only was it the first 3D Final Fantasy, it was also one of the first (Japanese) RPGs played by a new generation of video game players. It was one of the best-selling video games and was hugely popular outside of Japan. So how come I avoided it all those years?

Back then, I played pretty much all Final Fantasy games available in North America up to Final Fantasy VI, and since then I played the previously unreleased ones (FF II, III and V). I even played many of the side-franchises of Final Fantasy (Legend, Mystic Quest, and so on), and pretty much all RPGs made by Square available on Nintendo consoles.

But then, in a shrewd move, Square became Sony-exclusive. In exchange, Sony would mount a huge marketing campaign to promote FF VII across all of its product lines (movies, music, publications, electronics, DVDs...). At the same time, it heralded a generation of early 3D games mixed with pre-rendered backdrops (akin to classical animation) and FMV cut scenes, making video games more accessible, movie-like and appealing to non-gamers. Basically, it became the forerunner of everything I hated about high-budget video games that were more movies "for the masses" than games.

Over the years, I couldn't escape its influence, even in other media. I had the misfortune of seeing the overly pretentious Final Fantasy movies (Spirits Within and that FF VII sequel/thing), and of witnessing the effect it had on Square's ports of their Nintendo-based RPGs to the PlayStation, which replaced important cut scenes with ill-conceived FMVs. Not playing anything else from the Final Fantasy series was my own kind of "rebellion".

What made it infuriating to me was that so many people considered FF VII the "best video game ever" without having played any previous game in the series, games they summarily dismissed because they don't contain FMVs or aren't in 3D. Those who would defend FF VI as being objectively the better game would also be dismissed with the dubious argument that people simply prefer the first Final Fantasy they played. I'm fortunate enough that the first Final Fantasy game I played was the original game in the series (the one released on the NES in North America), and I clearly don't consider it the best, far from it. At the same time, I do hold Final Fantasy VI to be the best Final Fantasy game up to that point in the series, the best RPG I've ever played and a masterpiece. I'd be pleasantly surprised if FF VII turned out better than VI and lived up to its hype and marketing.

Now, 15 years later, I'm emotionally distanced enough from that teen rebellion to attempt to play this game more objectively and on its own merits, apart from the Sony hype (and my hatred of Sony) and the opinions of other uneducated players. I've already played a few hours of the game and have lots to say, but I'll reserve judgment until I complete it. For now, I'll follow up on how I'm able to play the game on my Mac in the first place.

Syndicated 2014-07-12 17:23:36 from Benad's Blog

Google Play Music in Canada

One of the things I noticed when I got the Nexus 5 phone was that it pales as an audio player compared to the iPhone. What I didn't mention is how spartan the built-in Google Play Music app was. All I could do to play music was transfer audio files over a USB cable. While it is nice that it auto-scans the storage for music files, I would have preferred a cleaner approach where I could manually add specific audio files to my music collection.

Since I recycled my iPhone 4S as my main music and podcast player, I seldom opened the Play Music app, until early June, when I casually looked at it after an app update and noticed it had quite a few more options. Basically, "Google Play Music All Access" was now available in Canada.

The first thing that jumped out at me is that you can upload your music files to your Play Music library, for free. It can even automatically import your entire iTunes library. I tried it with mine, which has about 3800 songs and two dozen playlists, and it worked quite well. There was a minor issue with the file importer ignoring all music files that had accents in their file names, so I had to import those files manually, but still, compared to the non-free iTunes Match service, the experience was amazingly smooth. To compare, Play Music never, ever had any issue importing and playing back my non-matched audio files (meaning, songs not part of the Play Music store), while I constantly ran into issues and general slowness with iTunes Match. Yes, the $30-a-year service from iTunes is worse than the free Google service, and the Google service doesn't require you to install iTunes in the first place, for all you iTunes haters out there.

So, for the "All Access" thing, it is quite similar to Rdio. I talked about music streaming services in the past, but to give you a summary of the what happened since in Canada, not much. There's still only Rdio, Deezer, and a few small ones with even smaller libraries. So, doing the legal thing with copyright in Canada still sucks.

After using Rdio for a few years, there are still a few things that annoy me, especially given that it's quite an expensive service ($10 per month):

  • Rdio is still highly "social", with everything shared and public by default. And by that I mean your playback history, playlists, listening habits, what you're listening to right now, and more. They compromised by introducing a "private" mode where things are shared only with your Rdio friends, but that still sucks.
  • The music collection oddly seems to be shrinking over time. Granted, I noticed that a lot of albums have exact duplicates, the only distinction being the label that published the album. This creates the weird effect that if you add one of those albums to your collection, it may by chance get delisted and you'll have to hunt down the new "owner". Still, half the time stuff gets delisted and never comes back, almost as often as those "1-year contracts" you see on Netflix Canada.
  • Playlist management sucks. Sure, they just added the ability to manage playlists on iOS, but still, on all platforms you cannot manipulate more than one song at a time. For example, breaking the album "Mozart: The Complete Operas" into manageable playlists took hours, moving songs one by one. Shift-click?
  • Search, and pretty much the entire GUI on desktop or iOS, is somehow slow. Looking for "that song" doesn't work well and can be frustrating.
  • If you mark songs to be synchronized to mobile devices, it applies to all your devices. That doesn't make sense if you only carry one device with you, not to mention the massive bandwidth usage it can generate if you have four devices downloading songs at the same time.

So, I had to try "All Access" from Google, at least to compare. Also because the first month is free, and if you sign up before June 30 (meaning today or tomorrow), it is $8 instead of $10 per month, for life.

So, how does it stack up compared to Rdio? Let's start with the cons first, to be fair.

Cons

  • The user reviews from the Play Music store don't show up in the player view. I miss the inline user reviews from Rdio.
  • No global playback history. Sure, radio history shows up in the current queue, but once the queue is cleared, it is gone. Also, playback queues aren't shared across devices, unless you save them as playlists each time.
  • Mobile support is limited to iOS and Android.
  • The "Thumbs up" automatic playlist isn't sorted the same way across devices for some reason.

Pros

  • All your music is there. Personal, bought on Play Music, or "All Access".
  • Search. It's Google. Search is amazing and instant.
  • The GUI is amazingly fast on all platforms.
  • The album library is cleaner (no label duplicates).
  • The library is somewhat bigger than Rdio's. For every one "vanishing" album on Rdio, there are three or four that exist only on All Access.
  • Playlist and metadata editing is great. Shift-click to select multiple songs works, and it is quite powerful.
  • Song caching for offline playback is done per-device, not as a global setting as on Rdio.

Overall, Google Play Music All Access, apart from its stupid name ("Rdio" has four letters), wins hands down. I think I'll transition from Rdio to All Access in the next few months. Pricing is identical, and feature-wise, All Access is much cleaner, faster and bigger than Rdio. And the (amazingly) completely free "My Library" music upload service is just icing on the cake.

As for iTunes, Apple should get their act together. iTunes Match is plain buggy and slow, searching (locally or in the store) is even slower, their podcasting support is broken beyond repair (avoid it completely), and everybody on Windows hates iTunes, for good reason. Sure, there are iTunes Radio, Beats, and so on, but they're all US-only, so why should I care if it's going to take until 2016 before we see any of them in Canada? The same can be said of Spotify, Pandora, Amazon Music, and so on: until you show up internationally, outside of that Silicon Valley bubble, put up or shut up. Google Play Music All Access is there internationally now.

Syndicated 2014-06-29 00:05:25 from Benad's Blog

iOS and Android Development Tools

As I previously mentioned, I've started doing some iOS and Android development. While I haven't built an app of any reasonable size on either platform, I've read a few books, namely "Android Programming: The Big Nerd Ranch Guide" and "Learn iOS 7 App Development", and developed some "beginner's" software on both. I've also delved a little bit deeper into both platforms, specifically Core Animation and the Android NDK.

And, being opinionated about everything, I quickly formed an opinion of the development tools for iOS, with Xcode, and Android, with Eclipse ADT and Android Studio.

Xcode for iOS

Xcode represents Apple's software aesthetics well: an over-simplified, limited GUI, but if you can live within those limits it is highly efficient. The GUI is hit-and-miss, and it lacks a ton of features you can now expect from modern IDEs, refactoring for example. If you want a more advanced IDE for Objective-C, you may want to look at AppCode from JetBrains, the makers of IntelliJ IDEA.

Its build system is similar to Microsoft Visual Studio's, in the sense that it has projects with various settings, and it compiles your code with its own proprietary system. You can use xcodebuild to build an Xcode project from the command line. If you have to use Makefiles to build some cross-platform code, you may want to look at MacPorts, or use the xcrun command to look up the compiler tools specific to an SDK (xcrun --sdk iphoneos ...). Of course, you can add custom build steps that run custom scripts, which in turn can execute your external Makefiles.
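For example (the project, scheme and source file names here are hypothetical):

    # Build an Xcode project from the command line
    xcodebuild -project MyApp.xcodeproj -scheme MyApp -sdk iphoneos build

    # Locate the iOS SDK's compiler, or run it directly, from an external Makefile
    xcrun --sdk iphoneos --find clang
    xcrun --sdk iphoneos clang -arch arm64 -c foo.c -o foo.o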

Still, you won't want to leave Xcode much. Everything is well integrated in there, including the "Quick Help" in the side bar, simple packaging and signing of the iOS app packages, running and debugging with an emulator or a real device plugged in with a USB cable, and so on. It is a complete, self-contained environment, with no surprises. Considering that most of it is closed-source and that you'd have no other option if it didn't work well, it is comforting that it is quite stable.

It should be noted that Xcode includes a superbly packaged set of documentation. The quality of the documentation is very high and comes very close to MSDN.

ADT and Android Studio

If I had been asked a few years ago to develop a mobile OS, I would surely have done something quite similar to Android. Based on Java (at least its API, hence Dalvik), Linux (but with a simplified user-space API, hence Bionic), a bunch of XML files, a hacked version of Eclipse, and so on. This sounds like praise, but it's not, considering how wrong I was and how much I didn't know any better back then.

Let's start with the Eclipse-based Android Developer Tools (ADT). Whatever was on Google's web site was broken out of the box. Its update mechanism was missing update server sources, so updating it would break its Android 4.4 support. Usability is garbage, but that's expected from Eclipse. The emulator is shockingly slow, even on my bleeding-edge Intel Haswell i7 with 16 GB of RAM. Of course, you can use the virtualized Android image for Intel chips, but the version available a few months ago would crash Mac OS X 10.9, and once you fix it with a patch from Intel, it's still twice as slow as the iPhone simulator. Oh, and rotations in the Android 4.4 emulator don't work. Go figure.

Of course, I could switch from one beta-quality IDE, ADT, to another beta-quality, IDEA-based one, Android Studio. It sucks and it's buggy, but just slightly less so than ADT. It insists on converting ADT's integrated build system to a bunch of equally obscure Gradle plugins. In theory this is more flexible, but editing a Gradle build file is about as intuitive as Maven, meaning not at all.

Oh, and if, after all those dire warnings that you should not attempt to develop native, non-Java code on Android, you still do so with the Native Development Kit (NDK), you'll be slapped in the face with a horrible hacked build system built atop Makefiles (ndk-build). Of course, the NDK with its Makefile hack doesn't integrate well with ADT, and can't seem to be integrated at all with Android Studio. You can also forget about your slightly faster virtualized Intel Android image: back to the slow ARM emulator.
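If you do go down that road anyway, the invocation itself is at least short (assuming the conventional jni/Android.mk and jni/Application.mk layout):

    # Build the native code from the project root; NDK_DEBUG=1 produces a debuggable build
    ndk-build NDK_DEBUG=1 -j4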

Basically, Android development tools suck. Plan ahead a few days of work to set it up.

Hey, I could have been a video game programmer for game consoles, so I should stop complaining about crappy development environments.

Syndicated 2014-05-27 00:27:14 from Benad's Blog

OpenSSL: My Heart is Bleeding

After a week, I think I can comfortably explain what happened with this "Heartbleed" OpenSSL bug. Now, everybody makes mistakes. Especially programmers. Especially me. But at least my errors didn't create a major security hole in 20% of the Internet. Let's review some basic tenets of software engineering:

  1. All code (beyond a minimal size and complexity) has bugs. Less code and complexity (and functionality) means fewer bugs.
  2. Software should be made resilient against errors. If it can't be, it should at least halt (crash).
  3. Software should be designed for humans, both its code and its user interface.

Out of hubris, excess and arrogance, the OpenSSL developers managed to do the opposite of all of these tenets. To quote Theo de Raadt:

OpenSSL is not developed by a responsible team.

Why? Let's do some investigation.

First, Robin Seggelmann had the idea to add a completely unnecessary "heartbeat" feature to TLS. Looking at the protocol design alone, the simple fact that the size of the payload is specified in two different places (the TLS record itself and the Heartbeat message) is pretty bad and begs for a security hole. Anyway, tenet one.

Still, Seggelmann went ahead and submitted working code a year later, on December 31st at 11:59 PM, the best possible time for a code review. Of course, the code was filled with non-descriptive variable names which hid the error in plain sight during the ineffective code review, but given the poor quality of the OpenSSL code, they found this acceptable. That's tenet three.

At this point, you may ask: "Shouldn't most modern malloc implementations minimally protect software against buffer overflows and overreads?" If you did, you are correct. But then, years ago, OpenSSL implemented its own memory allocation scheme. If you try to revert that back to plain malloc, OpenSSL doesn't work anymore, because its code has bugs that depend on memory blocks being recycled in LIFO fashion. That's tenet two.

The result is bad, and very, very real. In Canada, nearly a thousand Social Insurance Numbers were leaked. And that doesn't even begin to account for how many private keys and how much information leaked that way over the past two years.

By the way, this kind of mess has been my experience with cryptographic software in general. The usability problem with cryptography isn't just for end users, but also with the code itself. Using single-letter variables in a mathematical context where each variable is described at length may be acceptable, but meaningless variable letters without comments in code isn't. While I don't mind such "math code" much in data compression, in security it makes the code less likely to be secure. Basically, everybody thinks that being smart is sufficient for writing good code, so of course they would be offended if a software engineer recommended writing the code from their specs instead of letting them do it themselves. No wonder the worst code always comes from university "careerists".

Personally, I'd stop using OpenSSL as soon as possible.

Syndicated 2014-04-15 01:29:07 from Benad's Blog

