Older blog entries for mdz (starting at number 17)

DevOps and Cloud

DevOps

I first heard about DevOps from Lindsay Holmwood at linux.conf.au 2010. Since then, I’ve been following the movement with interest. It seems to be about cross-functional involvement in software teams, specifically between software development and system administration (or operations). In many organizations, especially SaaS shops, these two groups are placed in opposition to each other: developers are driven to deliver new features to users, while system administrators are held accountable for the operation of the service. In the best case, they maintain a healthy balance by pushing in opposite directions, but more typically, they resent each other for getting in the way, as a result of this dichotomy:

                          Development                     Operations
is responsible for…       creating products               offering services
is measured on…           delivery of new features        high reliability
optimizes by…             increasing velocity             controlling change
and so is perceived as…   reckless and irresponsible      obstructing progress

Of course, both functions are essential to a viable service, and so DevOps aims to replace this opposition with cooperation. By removing this friction from the organization, we hope to improve efficiency, lower costs, and generally get more work done.


So, DevOps promotes the formation of cross-functional teams, where individuals still take on specialist “development” or “operations” roles, but work together toward the common goal of delivering a great experience to users. By working as teammates, rather than passing work “over the wall”, they can both contribute to development, deployment and maintenance according to their skills and expertise. The team becomes a “devops” team, and is responsible for the entire product life cycle. Particular tasks may be handled by specialists, but when there’s a problem, it’s the team’s problem.

Some take it a step further, and feel that what’s needed is to combine the two disciplines, so that individuals contribute in both ways. Rather than thinking of themselves as “developers” or “sysadmins”, these folks consider themselves “devops”. They work to become proficient in both roles, and to synthesize new ways of working by drawing on both types of skills and experience. A common crossover activity is the development of sophisticated tools for automating deployment, monitoring, capacity management and failure resolution.

DevOps meets Cloud

Like DevOps, cloud is not a specific technology or method, but a reorganization of the model (as I’ve written previously). It’s about breaking down the problem in a different way, splitting and merging its parts, and creating a new representation which doesn’t correspond piece-for-piece to the old one.

DevOps drives cloud because cloud offers a richer toolkit for the way DevOps teams work: fast, flexible, efficient. Tools like Amazon EC2 and Google App Engine solve the right sorts of problems. Cloud also drives DevOps because it calls into question the traditional way of organizing software teams: a development/operations division just doesn’t “fit” cloud as well as a DevOps model does.

Deployment is a classic duty of system administrators. In many organizations, only the IT department can implement changes in the production environment. Reaping the benefits of an IaaS environment requires deploying through an API, and therefore deployment requires development. While it is already common practice for system administrators to develop tools for automating deployment, and tools like Puppet and Chef are gaining momentum, IaaS makes this a necessity, and raises the bar in terms of sophistication. Doing this well requires skills and knowledge from both sides of the “fence” between development and operations, and can accelerate development as well as promote stability in production.
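
To make that concrete, here is a minimal sketch of what API-driven deployment can look like, using Amazon’s EC2 command line tools (the AMI ID, key pair and security group names below are placeholders, not taken from any real environment):


# launch two application servers from a prepared machine image
ec2-run-instances ami-12345678 -n 2 -t m1.small -k deploy-key -g appservers

# list what is running, then hand the new instances to configuration
# management (e.g. Puppet or Chef) to install and configure the application
ec2-describe-instances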


This is exemplified by infrastructure service providers like Amazon Web Services, where customers pay by the hour for “black box” access to computing resources. How those resources are provisioned and maintained is entirely Amazon’s problem, while its customers must decide how to deploy and manage their applications within Amazon’s IaaS framework. In this scenario, some operations work has been explicitly outsourced to Amazon, but IaaS is not a substitute for system administration. Deployment, monitoring, failure recovery, performance management, OS maintenance, system configuration, and more are still needed. A development team which is lacking the experience or capacity for this type of work cannot simply “switch” to an IaaS model and expect these needs to be taken care of by their service provider.


With platform service providers, the boundaries are different. Developers, if they build their application on the appropriate platform, can effectively outsource (mostly) the management of the entire production environment to their service provider. The operating system is abstracted away, and its maintenance can be someone else’s problem. For applications which can be built with the available facilities, this will be a very attractive option for many organizations. The customers of these services may be traditional developers, who have no need for operations expertise. PaaS providers, though, will require deep expertise in both disciplines in order to build and improve their platform and services, and will likely benefit from a DevOps approach.

Technical architecture draws on both development and operations expertise, because design goals like performance and robustness are affected by all layers of the stack, from hardware, power and cooling all the way up to application code. DevOps itself promotes greater collaboration on architecture, by involving experts in both disciplines, but cloud is a great catalyst because cloud architecture can be described in code. Rather than talking to each other about their respective parts of the system, they can work together on the whole system at once. Developers, sysadmins and hybrids can all contribute to a unified source tree, containing both application code and a description of the production environment: how many virtual servers to deploy, their specifications, which components run on which servers, how they are configured, and so on. In this way, system and network architecture can evolve in lockstep with application architecture.
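
As a sketch of the idea (the file layout and format here are invented purely for illustration, and reuse the same hypothetical EC2 tooling as above), the environment description can live next to the application code and drive deployment directly:


# infrastructure/servers.txt, kept in the same tree as the application code:
#   web1 m1.small
#   web2 m1.small
#   db1  m1.large

# deployment tooling reads the description and asks the IaaS API for
# matching virtual servers
while read name type; do
    ec2-run-instances ami-12345678 -t "$type" -k deploy-key
done < infrastructure/servers.txt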

Cloudy promises such as dynamic scaling and fault tolerance call for a DevOps approach in order to be realized in a real-world scenario. These systems involve dynamically manipulating production infrastructure in response to changing conditions, and the application must adapt to these changes. Whether this takes the form of an active, intelligent response or a passive crash-only approach, development and operational considerations need to be aligned.
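
A deliberately naive sketch of the operations half of that loop (the metric source, threshold and image are all assumptions) might poll a load figure and add capacity when it climbs; the application still has to notice and make use of the new capacity, which is where the development half comes in:


# poll a (hypothetical) monitoring endpoint every five minutes and
# launch another application server whenever load exceeds a threshold
while sleep 300; do
    load=$(curl -s http://monitor.internal/load)
    if [ "$load" -gt 80 ]; then
        ec2-run-instances ami-12345678 -t m1.small -k deploy-key
    fi
done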

So what?

DevOps and cloud will continue to reinforce each other and gain momentum. Both individuals and organizations will need to adapt in order to take advantage of the opportunities provided by these new models. Because they’re complementary, it makes sense to adopt them together, so those with expertise in both will be at an advantage.


Syndicated 2010-06-08 10:28:23 from We'll see | Matt Zimmerman

Summary of development plans for Ubuntu 10.10

With the 10.10 developer summit behind us, several teams have published engineering plans for the 10.10 release cycle, including:

  • The Desktop team plan, via Rick Spencer, includes both Desktop Edition and Netbook Edition, and covers Software Center, GNOME, application and browser selection (including Chromium), X11 and social networking
  • The Server team plan, via Jos Boumans, covers migrating services to Upstart, improvements to the mail stack, cloud and cluster storage systems, cloud deployment and load balancing, Ubuntu Enterprise Cloud, and Java
  • The Foundations team plan, via Duncan McGreggor, covers system initialization, btrfs, cleaning house, improvements to the installers, Software Center (also) and Upstart (itself), along with a batch of miscellaneous projects, including compiling Ubuntu for the i686 instruction set
  • The Kernel team plan, via Leann Ogasawara, covers the choice of kernel 2.6.35, the current Ubuntu patch set, the configuration of our various kernel flavours, bug management, and the availability of backports of newer kernels to Ubuntu LTS releases

Syndicated 2010-05-31 10:41:00 from We'll see | Matt Zimmerman

Extracting files from a nandroid backup using unyaffs

I recently upgraded my G1 phone to the latest Cyanogen build (5.x). Since the upgrade instructions recommend wiping user data, I made a “nandroid” backup first, using the handy Amon_RA recovery image. I’ve gotten pretty familiar with the Android filesystem layout, and was confident I could restore anything I really missed (such as my wpa_supplicant.conf with all of my WiFi credentials).

It wasn’t until I finished with the upgrade that I realized the backup wasn’t trivial to work with. It’s a raw yaffs2 flash image, which can’t be simply mounted on a loop device. After messing around for a bit with the nandsim module, mtd-utils and the yaffs2 kernel module, I realized there was a much simpler way: the unassuming unyaffs. It says that it can only extract images created by mkyaffs2image, but apparently the images in the nandroid backup are created this way (or otherwise compatible with unyaffs).

So I downloaded and built unyaffs:


svn checkout http://unyaffs.googlecode.com/svn/trunk/ unyaffs
cd unyaffs
gcc -o unyaffs unyaffs.c

and then ran it on the backup image:


mkdir g1data && cd g1data # unyaffs extracts into the current directory
~/src/android/unyaffs/unyaffs /media/G1data/nandroid/HT839GZ23983/BCDS-20100529-1311/data.img

At which point I could restore files one by one, e.g.:


adb push /tmp/g1data/misc/wifi/wpa_supplicant.conf /data/misc/wifi/

After toggling WiFi off and then back on, all of my credentials were restored. I was able to restore preferences for various applications in the same way.


Syndicated 2010-05-29 19:24:29 from We'll see | Matt Zimmerman

Rethinking the Ubuntu Developer Summit

This is a repost from the ubuntu-devel mailing list, where there is probably some discussion happening by now.

After each UDS, the organizers evaluate the event and consider how it could be further improved in the future. As a result of this process, the format of UDS has evolved considerably, as it has grown from a smallish informal gathering to a highly structured matrix of hundreds of 45-to-60-minute sessions with sophisticated audiovisual facilities.

If you participated in UDS 10.10 (locally or online), you have hopefully already completed the online survey, which is an important part of this evaluation process.

A survey can’t tell the whole story, though, so I would also like to start a more free-form discussion here among Ubuntu developers as well. I have some thoughts I’d like to share, and I’m interested in your perspectives as well.

Purpose

The core purpose of UDS has always been to help Ubuntu developers to explore, refine and share their plans for the subsequent release. It has expanded over the years to include all kinds of contributors, not only developers, but the principle remains the same.

We arrive at UDS with goals, desires and ideas, and leave with a plan of action which guides our work for the rest of the cycle.

The status quo

UDS looks like this:

[screenshot: the UDS schedule grid]

This screenshot is only 1600×1200, so there are another 5 columns off the right edge of the screen for a total of 18 rooms. With 7 time slots per day over 5 days, there are over 500 blocks in the schedule grid. 9 tracks are scattered over the grid. We produce hundreds of blueprints representing projects we would like to work on.

It is an impressive achievement to pull this event together every six months, and the organizers work very hard at it. We accomplish a great deal at every UDS, and should feel good about that. We must also constantly evaluate how well it is working, and make adjustments to accommodate growth and change in the project.

How did we get here?

(this is all from memory, but it should be sufficiently accurate to have this discussion)

In the beginning, before it was even called UDS, we worked from a rough agenda, adding items as they came up, and ticking them off as we finished talking about them. Ad hoc methods worked pretty well at this scale.

As the event grew, and we held more and more discussions in parallel, it was hard to keep track of who was where, and we started to run into contention. Ubuntu and Launchpad were planning their upcoming work together at the same time. One group would be discussing topic A, and find that they needed the participation of person X, who was already involved in another discussion on topic B. The A group would either block, or go ahead without the benefit of person X, neither of which was seen to be very effective. By the end of the week, everyone was mentally and physically exhausted, and many were ill.

As a result, we decided to adopt a schedule grid, and ensure that nobody was expected to be in two places at once. Our productivity depended on getting precisely the right people face to face to tackle the technical challenges we faced. This meant deciding in advance who should be present in each session, and laying out the schedule to satisfy these constraints. New sessions were being added all the time, so the UDS organizers would stay up late at night during the event, creating the schedule grid for the next day. In the morning, over breakfast, everyone would tell them about errors, and request revisions to the schedule. Revisions to the schedule were painful, because we had to re-check all of the constraints by hand.

So, in the geek spirit, we developed a program which would read in the constraints and generate an error-free schedule. The UDS organizers ran this at the end of each day during the event, checked it over, and posted it. In the morning, over breakfast, everyone would tell them about constraints they hadn’t been aware of, and request revisions to the schedule. Revisions to the schedule were painful, because a single changed constraint would completely rearrange the schedule. People found themselves running all over the place to different rooms throughout the day, as they were scheduled into many different meetings back-to-back.

At around this point, UDS had become too big, and had too many constraints, to plan on the fly (unconference style). We resolved to plan more in advance, and agree on the scheduling constraints ahead of time. We divided the event into tracks, and placed each track in its own room. Most participants could stay in one place throughout the day, taking part in a series of related meetings except where they were specifically needed in an adjacent track. We created the schedule through a combination of manual and automatic methods, so that scheduling constraints could be checked quickly, but a human could decide how to resolve conflicts. There was time to review the schedule before the start of the event, to identify and fix problems. Revisions to the schedule during the event were fewer and less painful. We added keynote presentations, to provide opportunities to communicate important information to everyone, and ease back into meetings after lunch. Everyone was still exhausted and/or ill, and tiredness took its toll on the quality of discussion, particularly toward the end of the week.

Concerns were raised that people weren’t participating enough, and might stay on in the same room passively when they might be better able to contribute to a different session happening elsewhere. As a result, the schedule was randomly rearranged so that related sessions would not be held in the same room, and everyone would get up and move at the end of each hour.

This brings us roughly to where things stand today.

Problems with the status quo

  1. UDS is big and complex. Creating and maintaining the schedule is a lot of work in itself, and this large format requires a large venue, which in turn requires more planning and logistical work (not to mention cost). This is only worthwhile if we get proportionally more benefit out of the event itself.
  2. UDS produces many more blueprints than we need for a cycle. While some of these represent an explicit decision not to pursue a project, most of them are set aside simply because we can’t fit them in. We have the capacity to implement over 100 blueprints per cycle, but we have *thousands* of blueprints registered today. We finished less than half of the blueprints we registered for 10.04. This means that we’re spending a lot of time at UDS talking about things which can’t get done that cycle (and may never get done).
  3. UDS is (still) exhausting. While we should work hard, and a level of intensity helps to energize us, I think it’s a bit too much. Sessions later in the week are substantially more sluggish than early on, and don’t get the full benefit of the minds we’ve brought together. I believe that such an intense format does not suit the type of work being done at the event, which should be more creative and energetic.
  4. The format of UDS is optimized for short discussions (as many as we can fit into the grid). This is good for many technical decisions, but does not lend itself as well to generating new ideas, deeply exploring a topic, building broad consensus or tackling “big picture” issues facing the project. These deeper problems sometimes require more time. They also benefit tremendously from face-to-face interaction, so UDS is our best opportunity to work on them, and we should take advantage of it.
  5. UDS sessions aim for the minimum level of participation necessary, so that we can carry on many sessions in parallel: we ask, “who do we need in order to discuss this topic?” This is appropriate for many meetings. However, some would benefit greatly from broader participation, especially from multiple teams. We don’t always know in advance where a transformative idea will come from, and having more points of view represented would be valuable for many UDS topics.
  6. UDS only happens once per cycle, but design and planning need to continue throughout the cycle. We can’t design everything up front, and there is a lot of information we don’t have at the beginning. We should aim to use our time at UDS to greatest effect, but also make time to continue this work during the development cycle. “design a little, build a little, test a little, fly a little”

Proposals

  1. Concentrate on the projects we can complete in the upcoming cycle. If we aren’t going to have time to implement something until the next cycle, the blueprint can usually be deferred to the next cycle as well. By producing only moderately more blueprints than we need, we can reduce the complexity of the event, avoid waste, prepare better, and put most of our energy into the blueprints we intend to use in the near future.
  2. Group related sessions into clusters, and work on them together, with a similar group of people. By switching context less often, we can more easily stay engaged, get less fatigued, and make meaningful connections between related topics.
  3. Organize for cross-team participation, rather than dividing teams into tracks. A given session may relate to a Desktop Edition feature, but depends on contributions from more than just the Desktop Team. There is a lot of design going on at UDS outside of the “Design” track. By working together to achieve common goals, we can more easily anticipate problems, benefit from diverse points of view, and help each other more throughout the cycle.
  4. Build in opportunities to work on deeper problems, during longer blocks of time. As a platform, Ubuntu exists within a complex ecosystem, and we need to spend time together understanding where we are and where we are going. As a project, we have grown rapidly, and need to regularly evaluate how we are working and how we can improve. This means considering more than just blueprints, and sometimes taking more than an hour to cover a topic.

Syndicated 2010-05-27 16:15:09 from We'll see | Matt Zimmerman

Using Mumble with a bluetooth headset

At Canonical, we’ve started experimenting with Mumble as an alternative to telephone calls for real-time conversations. The operating model is very much like IRC, based on channels within which everyone can hear everyone else as they speak.

Mumble works best with a headset, which offers better audio recording quality due to the proximity of the microphone, and avoids problems with echo and feedback. I like to pace around while I talk, and so I’ve already invested in a Plantronics Calisto Pro, which includes a DECT handset, a Bluetooth headset and a nice charging base. My laptop has bluetooth onboard, so I set about trying to get Mumble set up to use the headset via bluetooth.

The first thing I tried was to click on the bluetooth icon on the panel, and select Set up new device.... After setting the headset to pairing mode, I waited quite some time for it to show up in the list, but it never did. After opening the preferences dialog, I discovered that this was (presumably) because I had already paired it, ages ago.

So, I went about trying to get PulseAudio to talk to it. After some hunting, I tried:


pactl load-module module-bluetooth-device address=00:23:xx:xx:xx:xx

This created a new Card in PulseAudio, which I could see in the Hardware tab of the Sound Preferences dialog and in pactl list, but it was inactive:


Card #1
Name: bluez_card.00_23_XX_XX_XX_XX
Driver: module-bluetooth-device.c
Owner Module: 17
Properties:
device.description = "Calisto PLT"
device.string = "00:23:XX:XX:XX:XX"
device.api = "bluez"
device.class = "sound"
device.bus = "bluetooth"
device.form_factor = "headset"
bluez.path = "/org/bluez/13899/hci0/dev_00_23_XX_XX_XX_XX"
bluez.class = "0x200404"
bluez.name = "Calisto PLT"
device.icon_name = "audio-headset-bluetooth"
Profiles:
hsp: Telephony Duplex (HSP/HFP) (sinks: 1, sources: 1, priority. 20)
off: Off (sinks: 0, sources: 0, priority. 0)
Active Profile: off

The Hardware tab confirmed that the device was Disabled, and using the “Off” profile. I could manually select the “Telephony Duplex (HSP/HFP)” profile, but this had no apparent effect. There were no Sources or Sinks to send or receive audio data to or from the headset (and thus nothing new in the Input or Output tabs of the preferences dialog). syslog hinted:


pulseaudio[15239]: module-bluetooth-device.c: Default profile not connected, selecting off profile

At this point, I recalled that when I suspend this laptop, the bluetooth driver gets unhappy. I don’t commonly use bluetooth on this laptop, so I hypothesized that the driver was in a weird state, and I decided to try unloading and reloading the btusb module. Once I did so, the device showed up in the panel menu, with a “Connect” menu item. Aha! The manual module loading above may turn out to be unnecessary if the device shows up in the menu initially.
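
In case it helps anyone reproduce this, the unload/reload step amounts to something like the following (assuming the stock btusb module and root privileges):


sudo modprobe -r btusb    # unload the bluetooth driver
sudo modprobe btusb       # load it again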

I selected the Connect menu item, and a bunch of magic happened, with the result that I heard the headset’s tone to indicate it was activated. Sound Preferences now showed it under Hardware as active using the Telephony Duplex profile, and it appeared under Input and Output as well. pactl list showed its sources and sinks. Mumble offered it as a choice for the input and output device. Progress!

Some experimentation (thanks, Colin) revealed that other people could hear me through the headset just fine.

However (Problem #1), I couldn’t hear them clearly through the headset. If I switch Mumble’s output to my speakers, they sound fine, so it’s not Mumble. So I tried:


paplay -d bluez_sink.00_23_XX_XX_XX_XX /usr/share/sounds/alsa/Front_Center.wav

…which also sounds awful. There is a whining noise which gets louder as the audio signal gets louder, making it very difficult to hear. I don’t know if the problem is with PulseAudio, bluez, the kernel, or the device, but using speakers for output is a workaround. This does seem to cause some echo, though, so I’ll need to track this down eventually.

Problem #2 is that Mumble seems to prevent PulseAudio from suspending the headset’s source. Even if I set it to “Push to Talk” mode, the headset stays active all the time, which will drain the battery. PulseAudio seems to do the right thing, and kill the radio link, if the source is left idle, and brings it up when there is activity, so this looks to be Mumble’s fault. I’ll need to fix this as well.
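
One way to watch for this (just a quick check using the same tools as above) is to see whether the bluetooth source ever reaches the SUSPENDED state while Mumble is running:


pactl list | grep -E 'Source #|State:'    # the bluez source should show SUSPENDED when idle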


Syndicated 2010-05-26 12:58:51 from We'll see | Matt Zimmerman

The behavioral economics of free software

People who use and promote free software cite various reasons for their choice, but do those reasons tell the whole story? If, as a community, we want free software to continue to grow in popularity, especially in the mainstream, we should understand better the true reasons for choosing it—especially our own.

Some believe that it offers higher quality, that the availability of source code results in a better product with higher reliability. Although it’s difficult to do an apples-to-apples comparison of software, there are certainly instances where free software components have been judged superior to their proprietary counterparts. I’m not aware of any comprehensive analysis of the general case, though, and there is plenty of anecdotal evidence on both sides of the debate.

Others prefer it for humanitarian reasons, because it’s better for society or brings us closer to the world we want to live in. These are more difficult to analyze objectively, as they are closely linked to the individual, their circumstances and their belief system.

For developers, a popular reason is the possibility of modifying the software to suit their needs, as enshrined in the Free Software Foundation’s freedom 1. This is reasonable enough, though the practical value of this opportunity will vary greatly depending on the software and circumstances.

The list goes on: cost savings, educational benefits, universal availability, social rewards, etc.

The wealth of evidence of cognitive bias indicates that we should not take these preferences at face value. Not only are human choices seldom rational, they are rarely well understood even by the humans making them. When asked to explain our preferences, we often have a ready answer—indeed, we may never run out of reasons—but they may not withstand analysis. We have many different ways of fooling ourselves with regard to our own past decisions and held beliefs, as well as those of others.

Behavioral economics explores the way in which our irrational behavior affects economies, and the results are curious and subtle. For example, the riddle of experience versus memory (TED video), or the several examples in “The Marketplace of Perception” (Harvard Magazine article). I think it would be illuminating to examine free software through this lens, and consider that the vagaries of human perception may have a very strong influence on our choices.

Some questions for thought:

  • Does using free software make us happier? If so, why? If not, why do we use it anyway?
  • Do we believe in free software because we have a great experience using it, or because we feel good about having used it? (Daniel Kahneman explains the difference)
  • Why do we want other people to use free software? Is it only because we want them to share our preference, or because we will benefit ourselves, or do we believe they will appreciate it for their own reasons?

If you’re aware of any studies along these lines, I would be interested to read about them.


Syndicated 2010-05-25 10:42:19 from We'll see | Matt Zimmerman

Ubuntu 10.10 (Maverick) Developer Summit

I spent last week at the Ubuntu Developer Summit in Belgium, where we kicked off the 10.10 development cycle.

Due to our time-boxed release cycle, not everything discussed here will necessarily appear in Ubuntu 10.10, but this should provide a reasonable overview of the direction we’re taking.

Presentations

While most of our time at UDS is spent in small group meetings to discuss specific topics, there are also a handful of presentations to convey key information and stimulate creative thinking.

A few of the more memorable ones for me were:

  • Mark Shuttleworth talked about the desktop, in particular the introduction of the new Unity shell for Ubuntu Netbook Edition
  • Fanny Chevalier presented Diffamation, a tool for visualizing and navigating the history of a document in a very flexible and intuitive way
  • Rick Spencer talked about the development process for 10.10 and some key changes in it, including a greater focus on meeting deadlines for freezes (and making fewer exceptions)
  • Stefano Zacchiroli, the current Debian project leader, gave an overview of how Ubuntu and Debian developers are working together today, and how this could be improved. He has posted a summary on the debian-project mailing list.

The talks were all recorded, though they may not all be online yet.

Foundations

The Foundations team provides essential infrastructure, tools, packages and processes which are central to the development of all Ubuntu products. They make it possible for the desktop and server teams to focus on their areas of expertise, building on a common base system and development procedures.

Highlights from their track:

Desktop

The desktop team manages both Desktop Edition and Netbook Edition, on a mission to provide a top-notch experience to users across a range of client computing devices.

Highlights from their track:

Server/Cloud

The server team is charging ahead with making Ubuntu the premier server OS for cloud computing environments.

Highlights from their track:

ARM

Kiko Reis gave a talk introducing ARM and the corresponding opportunity for Ubuntu. The ARM team ran a full track during the week on all aspects of their work, from the technical details of the kernel and toolchain, to the assembly of a complete port of Netbook Edition 10.10 for several ARM platforms.

Kernel

The kernel team provided essential input and support for the above efforts, and also held their own track where they selected 2.6.35 as their target version, agreed on a variety of changes to the Ubuntu kernel configuration, and created a plan for providing backports of newer kernels to LTS releases to support deployment on newer hardware.

Security

Like the kernel team, the security team provided valuable input into the technical plans being developed by other teams, and also organized a security track to tackle some key security topics such as clarifying the duration of maintenance for various collections of packages, and the ongoing development of AppArmor and Ubuntu’s AppArmor profiles.

QA

The QA team focuses on testing, test automation and bug management throughout the project. While quality is everyone’s responsibility, the QA team helps to coordinate these activities across different teams, establish common standards, and maintain shared infrastructure and tools.

Highlights from their track include:

Design

The design team organized a track at UDS for the first time this cycle, and team manager Ivanka Majic gave a presentation to help explain its purpose and scope.

Toward the end of the week, I joined in a round table discussion about some of the challenges faced by the team in engaging with the Ubuntu community and building support for their work. This is a landmark effort in mating professional design with a free software community, and there is still much to learn about how to do this well.

Community

The community track discussed the usual line-up of events, outreach and advocacy programs, organizational tools, and governance housekeeping for the 10.10 cycle, as well as goals for improving the translation of Ubuntu and related resources into many languages.

One notable project is an initiative to aggressively review patches submitted to the bug tracker, to show our appreciation for these contributions by processing them more quickly and completely.


Syndicated 2010-05-17 12:01:10 from We'll see | Matt Zimmerman

A farewell to Facebook

Actually, on second thought, this isn’t a farewell. There is no goodwill on either side of this bitter parting. Facebook doesn’t care about my privacy or consent, and I don’t care about their shady and anti-competitive profiteering. It’s over between us.

So, here’s what I did:

  1. made a list of all of my connections on Facebook (via copy and paste from a browser window)
  2. confirmed that I have alternate contact information for all of them, either by email or other social media
  3. deleted all of my photo albums
  4. deleted all of the information from all sections of my profile
  5. removed all applications
  6. deleted everything under “My Links”
  7. deleted all of my notes (this was tedious, as my blog was syndicated there)
  8. followed the WikiHow instructions to delete my account

Obviously, I can’t really control whether they hold onto my data in spite of this. This should provide a clear statement of intent on my part, though, if it comes up in future shenanigans.


Syndicated 2010-05-16 12:52:15 from We'll see | Matt Zimmerman

Ten TED talks I took in today

About a year ago, I started following the release of videos from TED events. If one looked interesting, I would download the video to watch later. In this way, I accumulated a substantial collection of talks which I never managed to watch.

I spent a Saturday evening working my way through the list. These are my favorites out of this batch.


Syndicated 2010-04-25 01:25:05 from We'll see | Matt Zimmerman

On Cloud

If you’re convinced that “cloud” is a useless buzzword, being used to describe everything under the sun, old and new, then…well, you’re right—that is, except for the “useless” part. It’s true that this word is being (ab)used to describe many different products, services and technologies which do not seem to relate to each other in any concrete way. “Cloud” in the abstract is being defined in many different ways, based on different fundamental characteristics of “cloud technology”. Nonetheless, there is something genuinely important going on here, and this is my view of what it’s about.

The hype

Countless businesses which only had “web sites” or “web applications” in 2008 now call them “cloud services”. They aren’t delivering any new benefits to their users, and they haven’t redesigned their infrastructure. What do they mean by this?

Cloud is defined, some say, by where your data is kept, and “cloud” means storing your data on remote servers instead of internal ones. Millions of Gmail and Flickr users have been “in the cloud” without even knowing it, and so has everyone who reads their mail over IMAP, or uses voicemail. In short, cloud is just data that is somewhere else.

[Photo credit: jurvetson]

Last September, I watched a presentation on “Cloud computing” where Symantec pitched their anti-virus product as having something like three different “cloud computing” technologies. These, it turned out, all involved downloading files over the Internet. In this view, cloud is essentially about the network, derived from our habit of using a drawing of a cloud in our network diagrams for so many years. Wikipedia currently shares this view, as its Cloud computing article opens with “Cloud computing is Internet-based computing”. No Internet, no cloud.

Maybe cloud is just SaaS, and any service provided on a pay-per-use basis over the Internet is a “cloud” service. It would seem that cloud is about paying for things, especially on a “utility” basis. The cloud is computing by the hour; it’s something you buy.

Others maintain that “cloud” is just virtualization, so if you’re using KVM or VMware, you’ve got a cloud already. In this view, cloud is about operating systems, and whether they’re running on real or virtual hardware. Cloud means deploying applications as virtual appliances, and creating new servers entirely in software. If your servers only have one OS on them, then they’re not cloud-ready.

If this vagueness and contradiction irritates you, then you’re not alone. It bugs me, and a lot of other people who are getting on with developing, using and providing technology and services. Cloud is not just a new name for these familiar technologies. In fact, it isn’t a technology at all. I don’t think it even makes sense to categorize technologies as “cloud” and “non-cloud”, though some may be more “cloudy” than others.

Nonetheless, there is some meaning and validity in each of these interpretations of cloud. So what is it?

A different perspective

[Image credit: TangYauHoong]

Cloud is a transition, a trend, a paradigm shift. It isn’t something which exists or not: it is happening, and we won’t understand its essential nature until it’s over. Simon Wardley explains this, in presentation format, much better than I could in this blog post, so if you haven’t already, I suggest that you take 14 minutes of your time and go and watch him do his thing.

This change doesn’t have a clear beginning or an end. Early on, it was a disconnected set of ideas without any identity. We’re now somewhere in the middle of the bell curve, with enough insight to give it a name, and sufficient momentum to guess at what happens next. In the end, it will be absorbed into “the way things are”.

What’s going on?

So, what is actually changing? The key trends I see are:

  1. People and organizations are becoming more comfortable relying on resources which do not exist in any particular place. Rather than storing precious metals under our beds, we entrust our finances to banks, which store our data and provide us with services which let us “use” our “money”. In computing terms, we’re no longer enamored with keeping all of “our” programs and “our” data on “our” hard drive: as it turns out, it’s not very safe there after all, and in some ways we can actually exercise more control over programs and data “out there” than “in here”. We can no longer hoard our gold, or drive off would-be thieves with a pistol, but that wasn’t a productive way to spend our energies anyway. While we can’t put our hands on our money, we can know exactly what’s happening with it from minute to minute, and use it in a variety of ways at a moment’s notice.
  2. IT products are ceding ground to services. In many cases, we no longer need to build software, or even buy it: we can simply use it. As computing needs become better understood, and can be met by commodity products, it becomes more feasible to develop services to meet them on-demand. This is why we see a spike in acronyms ending in “aaS”. Simon explains the progression in detail in his talk, linked above.
  3. Hardware, software and data are being reorganized according to new models, in order to provide these services effectively. Architectural patterns, such as “hardware/OS/framework/application” are giving way to new ones, like “hardware/OS/IaaS/OS/PaaS/SaaS/Internet/web browser/OS/hardware”. These are still evolving, and the lines between them are blurry. It’s a bit early to say what the dominant patterns will be, but system, network, data and application architecture are all being transformed. No single technology or architecture defines cloud, but virtualization (at the infrastructure level) and the web (at the application level) both seem to resonate strongly with “cloudy” design patterns.

These trends are reinforcing and accelerating each other, driving information technologies and businesses in a common direction. That, in a nutshell, is what cloud is all about.

So what?

This transformation is disruptive in many different ways, but the angles which most interest me at the moment are:

  • Operating systems – Cloud seems to indicate further commoditization of operating systems. We will have more operating systems than ever before (thanks to virtualization and IaaS), but we probably won’t think about them as much (as in software appliances). I think that the cloud world will want operating systems which are free, standard and highly customizable, which is potentially a great opportunity for Ubuntu and other open systems.
  • Software freedom – As Eben Moglen and others have pointed out, this trend has significant consequences for the free software movement. As the shape of software changes, our principles of freedom must evolve as well. What does it mean to have the freedom to “run” a program in the cloud? To copy it? To change it? What other protections might people need in order to exercise these freedoms in the future?
  • DevOps – I see a strong resonance between cloud—which is blurring the lines between hardware and software, infrastructure and applications, network and computer—and DevOps, which is bringing together the people who currently work in these different categories. Cloud means that we can no longer afford to treat these elements separately, and must work together to find better approaches to developing and deploying software, knocking down barriers and mapping new territory. Fortunately, cloud also means developing powerful new tools which will power this revolution.
  • Clients – A majority of cloud related activity seems to be focused on what happens to back-end servers, but what about the computers we actually touch, on our desks, and in our pockets? How will they change? Will we end up reverting to a highly centralized computing model, where clients are strictly limited to front-end user interface processing (e.g. a web browser)? Will clients “join” the cloud and be providers of computing resources, not only consumers? What new types of devices will we need in order to make the most of cloud?

I’d like to explore some of these topics in future posts.


Syndicated 2010-04-22 14:00:04 from We'll see | Matt Zimmerman
