Older blog entries for mikal (starting at number 982)

Fast Food Nation




ISBN: 9780547750330
LibraryThing
I don't read a lot of non-fiction, but I decided to finally read this book after it had sat on the shelf for a few years. I'm glad I read it, although as someone who regularly eats in the US I'm not sure whether I should be glad or freaked out. The book is an interesting study in how industrialization without proper quality controls can have some pretty terrible side effects. I'm glad to live in a jurisdiction where we actively test for food quality and safety.

The book is a good read, and I'd recommend it to anyone who doesn't have a weak stomach.

Tags for this post: book eric_schlosser food quality meat fast industrialized
Related posts: Dinner; Dishwasher Trout; Yum; 14 November 2003; Food recommendation; Generally poor audio quality on pod casts?


Comment

Syndicated 2014-11-16 01:43:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Specs for Kilo

Here's an updated list of the specs currently proposed for Kilo. I wanted to produce this before I start travelling for the summit in the next couple of days because I think many of these will be required reading for the Nova track at the summit.

API

  • Add instance administrative lock status to the instance detail results: review 127139 (abandoned).
  • Add more detailed network information to the metadata server: review 85673.
  • Add separated policy rule for each v2.1 api: review 127863.
  • Add user limits to the limits API (as well as project limits): review 127094.
  • Allow all printable characters in resource names: review 126696.
  • Expose the lock status of an instance as a queryable item: review 85928 (approved).
  • Implement instance tagging: review 127281 (fast tracked, approved).
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
  • Implement the v2.1 API: review 126452 (fast tracked, approved).
  • Microversion support: review 127127.
  • Move policy validation to just the API layer: review 127160.
  • Provide a policy statement on the goals of our API policies: review 128560.
  • Support X509 keypairs: review 105034.


Administrative

  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).


Containers Service



Hypervisor: Docker



Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.


Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).


Hypervisor: Ironic



Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).


Hypervisor: libvirt



Instance features



Internal

  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Transition Nova to using the Glance v2 API: review 84887.


Internationalization

  • Enable lazy translations of strings: review 126717 (fast tracked).


Performance

  • Dynamically alter the interval at which nova polls components, based on load and the expected time for an operation to complete: review 122705.


Scheduler

  • Add an IOPS weigher: review 127123 (approved).
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Move select_destinations() to using a request object: review 127612.


Security

  • Provide a reference implementation for console proxies that use TLS: review 126958 (fast tracked).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.


Tags for this post: openstack kilo blueprint spec
Related posts: One week of Nova Kilo specifications; Compute Kilo specs are open; On layers; Juno nova mid-cycle meetup summary: slots; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration

Comment

Syndicated 2014-10-23 19:27:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

One week of Nova Kilo specifications

It's been one week of specifications for Nova in Kilo. What are we seeing proposed so far? Here's a summary...

API



Administrative

  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705.


Containers Service



Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.


Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190.


Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (spec approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (spec approved).
  • Support the OVA image format: review 127054.


Hypervisor: libvirt

  • Add a new linuxbridge VIF type, macvtap: review 117465.
  • Add support for SMBFS as an image storage backend: review 103203.
  • Convert to using built-in libvirt disk copy mechanisms for cold migrations on non-shared storage: review 126979.
  • Support libvirt storage pools: review 126978.
  • Support quiescing filesystems during snapshots: review 126966.


Instance features

  • Allow direct access to LVM volumes if supported by Cinder: review 127318.


Internal

  • Move flavor data out of the system_metadata table in the SQL database: review 126620.


Internationalization



Scheduler

  • Add an IOPS weigher: review 127123 (spec approved).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530.
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Move select_destinations() to using a request object: review 127612.


Scheduling

  • Add instance count on the hypervisor as a weight: review 127871.


Security

  • Provide a reference implementation for console proxies that use TLS: review 126958.
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.


Tags for this post: openstack kilo blueprints spec
Related posts: Compute Kilo specs are open; Blueprints to land in Nova during Juno; On layers; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy

Comment

Syndicated 2014-10-13 03:27:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Compute Kilo specs are open

From my email last week on the topic:

I am pleased to announce that the specs process for nova in kilo is
now open. There are some tweaks to the previous process, so please
read this entire email before uploading your spec!

Blueprints approved in Juno
===========================

For specs approved in Juno, there is a fast track approval process for
Kilo. The steps to get your spec re-approved are:

 - Copy your spec from the specs/juno/approved directory to the
specs/kilo/approved directory. Note that if we declared your spec to
be a "partial" implementation in Juno, it might be in the implemented
directory. This was rare however.
 - Update the spec to match the new template
 - Commit, with the "Previously-approved: juno" commit message tag
 - Upload using git review as normal

Reviewers will still do a full review of the spec, we are not offering
a rubber stamp of previously approved specs. However, we are requiring
only one +2 to merge these previously approved specs, so the process
should be a lot faster.

A note for core reviewers here -- please include a short note on why
you're doing a single +2 approval on the spec so future generations
remember why.

Trivial blueprints
==================

We are not requiring specs for trivial blueprints in Kilo. Instead,
create a blueprint in Launchpad
at https://blueprints.launchpad.net/nova/+addspec and target the
specification to Kilo. New, targeted, unapproved specs will be
reviewed in weekly nova meetings. If it is agreed they are indeed
trivial in the meeting, they will be approved.

Other proposals
===============

For other proposals, the process is the same as Juno... Propose a spec
review against the specs/kilo/approved directory and we'll review it
from there.


After a week I'm seeing something interesting. In Juno the specs process was new, and we saw a pause in the development cycle while people actually wrote down their designs before sending the code. This time around people know what to expect, and there are leftover specs from Juno lying around, so we're seeing specs approved much faster than we did in Juno. This should reduce the effect of the "pipeline flush" that we saw in Juno.

So far we have five approved specs after only a week.

Tags for this post: openstack kilo blueprints spec
Related posts: Blueprints to land in Nova during Juno; On layers; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler

Comment

Syndicated 2014-10-12 16:39:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Lock In




ISBN: 0765375869
LibraryThing
I know I like Scalzi's stuff, but each series is so different that I like them all in different ways. I don't think he's written a murder mystery before, and this book was just as good as Old Man's War, which is a pretty high bar. This book revolves around a murder being investigated by someone who can only interact with the real world via personal androids. It's different from anything else I've seen, and a unique idea is pretty rare these days.

Highly recommended.

Tags for this post: book john_scalzi robot murder mystery
Related posts: Isaac Asimov's Robot Short Stories; Prelude To Foundation ; Isaac Asimov's Foundation Series; Caves of Steel; Robots and Empire ; A Talent for War


Comment

Syndicated 2014-10-08 02:43:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

On layers

There's been a lot of talk recently about what we should include in OpenStack and what is out of scope. This is interesting, in that many of us used to believe that we should do "everything". I think what's changed is that we're learning that solving all the problems in the world is hard, and that we need to re-focus on our core products. In this post I want to talk through the various "layers" proposals that have been made in the last month or so. Layers don't directly address what we should or shouldn't include in OpenStack, but they are a useful mechanism for breaking OpenStack up into chunks that are simpler to examine, and I think that makes them useful in their own right.

I would address what I believe the scope of the OpenStack project should be, but I feel that it makes this post so long that no one will ever actually read it. Instead, I'll cover that in a later post in this series. For now, let's explore what people are proposing as a layering model for OpenStack.

What are layers?

Dean Troyer did a good job of describing a layers model for the OpenStack project on his blog quite a while ago. He proposed the following layers (this is a summary, you should really read his post):

  • layer 0: operating system and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova
  • layer 2: extended basics -- Neutron, Cinder, Swift, Ironic
  • layer 3: optional services -- Horizon and Ceilometer
  • layer 4: turtles all the way up -- Heat, Trove, Moniker / Designate, Marconi / Zaqar


Dean notes that Neutron would move to layer 1 when nova-network goes away and Neutron becomes required for all compute deployments. Dean's post was also over a year ago, so it misses services like Barbican that have appeared since then. Services are only allowed to require services from lower-numbered layers, but can use services from higher-numbered layers as optional add-ins. So Nova, for example, can use Neutron, but cannot require it until Neutron moves into layer 1. Similarly, there have been proposals to add Ceilometer as a dependency for scheduling instances in Nova, and if we were to do that then we would need to move Ceilometer down to layer 1 as well. (I think doing that would be a mistake, by the way, and I have argued against it during at least two summits.)
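
To make that rule concrete, here is a minimal sketch in Python of Dean's layer assignments and the "require lower, optionally use higher" check. The layer numbers come from the list above; the dictionary and helper function are purely illustrative and not part of any OpenStack code.

    # A toy model of the layering rule: a service may hard-require only services
    # in strictly lower layers, though it may optionally use anything.
    # Layer numbers follow Dean Troyer's list above; nothing here is real Nova code.
    LAYER = {
        "oslo": 0,
        "keystone": 1, "glance": 1, "nova": 1,
        "neutron": 2, "cinder": 2, "swift": 2, "ironic": 2,
        "horizon": 3, "ceilometer": 3,
        "heat": 4, "trove": 4, "designate": 4, "zaqar": 4,
    }

    def may_require(service, dependency):
        """Return True if `service` is allowed to hard-require `dependency`."""
        return LAYER[dependency] < LAYER[service]

    # Nova may use Neutron, but cannot require it while Neutron sits in layer 2.
    assert not may_require("nova", "neutron")
    # Heat requiring Nova is fine: a layer 4 service depending on layer 1.
    assert may_require("heat", "nova")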

Sean Dague re-ignited this discussion with his own blog post relatively recently. Sean proposes new names for most of the layers, but the intent remains the same -- a compute-centric view of the services that are required to build a working OpenStack deployment. Sean and Dean's layer definitions are otherwise strongly aligned, and Sean notes that the probability of seeing something deployed at a given installation reduces as the layer count increases -- so for example Trove is way less commonly deployed than Nova, because the set of people who want a managed database as a service is smaller than the set of people who just want to be able to boot instances.

Now, I'm not sure I agree with the compute centric nature of the two layers proposals mentioned so far. I see people installing just Swift to solve a storage problem, and I think that's a completely valid use of OpenStack and should be supported as a first class citizen. On the other hand, resolving my concern with the layers model there is trivial -- we just move Swift to layer 1.

What do layers give us?

Sean makes a good point about the complexity of OpenStack installs and how we scare away new users. I agree completely -- we show people our architecture diagrams which are deliberately confusing, and then we wonder why they're not impressed. I think we do it because we're proud of the scope of the thing we've built, but I think our audiences walk away thinking that we don't really know what problem we're trying to solve. Do I really need to deploy Horizon to have working compute? No, of course not, but our architecture diagrams don't make that obvious. I gave a talk along these lines at pyconau, and I think as a community we need to be better at explaining to people what we're trying to do, while remembering that not everyone is as excited about writing a whole heap of cloud infrastructure code as we are. This is also why the OpenStack miniconf at linux.conf.au 2015 has pivoted from being a generic OpenStack chatfest to being something more solidly focussed on issues of interest to deployers -- we're just not great at talking to our users, and we need to reboot the conversation at community conferences until it's something that meets their needs.


We intend this diagram to amaze and confuse our victims


Agreeing on a set of layers gives us a framework within which to describe OpenStack to our users. It lets us communicate the services we think are basic and always required, versus those which are icing on the cake. It also lets us explain the dependencies between projects better, and that helps deployers work out what order to deploy things in.

Do layers help us work out what OpenStack should focus on?

Sean's blog post then pivots and starts talking about the size of the OpenStack ecosystem -- or the "size of our tent" as he phrases it. While I agree that we need to shrink the number of projects we're working on at the moment, I feel that the blog post is missing a logical link between the previous layers discussion and the tent size conundrum. It feels to me that Sean wanted to propose that OpenStack focus on a specific set of layers, but didn't quite get there for whatever reason.

Next Monty Taylor had a go at furthering this conversation with his own blog post on the topic. Monty starts by making a very important point -- he (like everyone involved) wants the OpenStack community to be as inclusive as possible. I want lots of interesting people at the design summits, even if they don't work directly on projects that OpenStack ships. You can be a part of the OpenStack community without having our logo on your product.

A concrete example of including non-OpenStack projects in our wider community was visible at the Atlanta summit -- I know for a fact that there were software engineers at the summit who work on Google Compute Engine. I know this because I used to work with them at Google when I was an SRE there. I have no problem with people working on competing products being at our summits, as long as they are there to contribute meaningfully in the sessions, and not just take from us. It needs to be a two-way street. Another concrete example is Ceph. I think Ceph is cool, and I'm completely fine with people using it as part of their OpenStack deploy. What upsets me is when people conflate Ceph with OpenStack. They are different. They're separate. And that is fine. Let's just not confuse people by saying Ceph is part of the OpenStack project -- it simply isn't, because it doesn't fall under our governance model. Ceph is still a valued member of our community and more than welcome at our summits.

Do layers help us work out what to focus OpenStack on for now? I think they do. Should we simply say that we're only going to work on a single layer? Absolutely not. What we've tried to do up until now is have OpenStack be a single big thing, what we call "the integrated release". I think layers give us a tool to find logical ways to break that thing up. Perhaps we need a smaller integrated release, and then continue with the other projects on their own release cycles? Or perhaps they release at the same time, but we don't block the release of a layer 1 service on the basis of release-critical bugs in a layer 4 service?

Is there consensus on what sits in each layer?

Looking at the posts I can find on this topic so far, I'd have to say the answer is no. We're close, but we're not aligned yet. For example, one proposal has a tweak to the previously proposed layer model that adds Cinder, Designate and Neutron down into layer 1 (basic services). The author argues that this is because stateless cloud isn't particularly useful to users of OpenStack. However, to be honest, I think this is wrong. I can see that stateless cloud isn't super useful by itself, but that argument assumes OpenStack is the only piece of infrastructure that a given organization has. Perhaps that's true for the public cloud case, but the vast majority of OpenStack deployments at this point are private clouds. So, you're an existing IT organization and you're deploying OpenStack to increase the level of flexibility in compute resources. You don't need to deploy Cinder or Designate to do that. Let's take the storage case for a second -- our hypothetical IT organization probably already has some form of storage -- a SAN, or NFS appliances, or something like that. So stateful cloud is easy for them -- they just have their instances mount resources from those existing storage pools like they would any other machine. Eventually they'll decide that hand-managing that is horrible and move to Cinder, but that's probably later, once they've gotten through the initial baby step of deploying Nova, Glance and Keystone.

The first step in using layers to decide what we should focus on is to decide what sits in each layer. I think the conversation needs to revolve around that for now, because if we drift off into debating whether sitting in a given layer means you're voted off the OpenStack island, we'll never even come up with a set of agreed layers.

Let's ignore tents for now

The size of the OpenStack "tent" is the metaphor being used at the moment for working out what to include in OpenStack. As I say above, I think we need to reach agreement on what is in each layer before we can move on to that very important conversation.

Conclusion

Given the focus of this post is the layers model, I want to stop introducing new concepts here for now. Instead, let me summarize where I stand so far -- I think the layers model is useful. I also think the layers should be an inverted pyramid -- layer 1, for example, should be as small as possible. This is because of the dependency model that the layers model proposes -- it is important to keep the list of things that a layer 2 service must use as small and coherent as possible. Another reason to keep the lower layers as small as possible is that each layer represents the smallest increment of an OpenStack deployment that we think is reasonable. We believe it is currently reasonable to deploy Nova without Cinder or Neutron, for example.

Most importantly of all, having those incremental stages of OpenStack deployment gives us a framework we have been missing in talking to our deployers and users. It makes OpenStack less confusing to outsiders, as it gives them bite sized morsels to consume one at a time.

So here are the layers as I see them for now:

  • layer 0: operating system, and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova, and Swift
  • layer 2: extended basics -- Neutron, Cinder, and Ironic
  • layer 3: optional services -- Horizon, and Ceilometer
  • layer 4: application services -- Heat, Trove, Designate, and Zaqar


I am not saying that everything inside a single layer is required to be deployed simultaneously, but I do think it's reasonable for Ceilometer to assume that Swift is installed and functioning. The big difference here between my view of layers and that of Dean, Sean and Monty is that I think that Swift is a layer 1 service -- it provides basic functionality that may be assumed to exist by services above it in the model.

I believe that when projects come to the Technical Committee requesting incubation or integration, they should specify what layer they see their project sitting at, and the justification for a lower layer number should be harder than that for a higher layer. So for example, we should be reasonably willing to accept proposals at layer 4, whilst we should be super concerned about the implications of adding another project at layer 1.

In the next post in this series I'll try to address the size of the OpenStack "tent", and what projects we should be focussing on.

Tags for this post: openstack kilo technical committee tc layers
Related posts: My candidacy for Kilo Compute PTL; Juno TC Candidacy; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic

Comment

Syndicated 2014-09-30 18:57:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

30 Sep 2014 (updated 30 Sep 2014 at 21:08 UTC)

Blueprints implemented in Nova during Juno

As we get closer to releasing the RC1 of Nova for Juno, I've started collecting a list of all the blueprints we implemented in Juno. This was mostly done because it helps me write the release notes, but I am posting it here because I am sure that others will find it handy too.

Process



Ongoing behind the scenes work

Object conversion

Scheduler
  • Support sub-classing objects. launchpad specification
  • Stop using the scheduler run_instance method. Previously the scheduler would select a host and then boot the instance; instead, the scheduler now selects hosts and returns them so that the caller boots the instance (a rough sketch of this change follows this list). This will make it easier to move the scheduler to being a generic service instead of being internal to nova. launchpad specification
  • Refactor the nova scheduler into being a library. This will make splitting the scheduler out into its own service later easier. launchpad specification
  • Move nova to using the v2 cinder API. launchpad specification
  • Move prep_resize to conductor in preparation for splitting out the scheduler. launchpad specification
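
As flagged in the run_instance item above, here is a rough sketch of the shape of that change. The names run_instance and select_destinations come from the specs, but the signatures and the caller shown here are simplified for illustration and are not the real Nova interfaces.

    # Illustrative only. The point of the change: the scheduler stops booting
    # instances itself and instead returns its host selections to the caller,
    # which then drives the boot.

    def run_instance_old(scheduler, request_spec):
        # Old flow: the scheduler both picks a host and boots the instance on it.
        host = scheduler.pick_host(request_spec)
        scheduler.boot_on(host, request_spec)

    def boot_with_select_destinations(scheduler, conductor, request_spec):
        # New flow: the scheduler only selects destinations and returns them;
        # the caller (e.g. the conductor) performs the boot.
        hosts = scheduler.select_destinations(request_spec)
        for host in hosts:
            conductor.boot_on(host, request_spec)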


API
  • Use JSON schema to strongly validate v3 API request bodies; please note this work will later be released as v2.1 of the Nova API (a generic validation sketch follows this list). launchpad specification
  • Provide a standard format for the output of the VM diagnostics call. This work will be exposed by a later version of the v2.1 API. launchpad specification
  • Move to the OpenStack standard name for the request id header, in a backward compatible manner. launchpad specification
  • Implement the v2.1 API on the V3 API code base. This work is not yet complete. launchpad specification
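
As noted in the JSON schema item above, here is a generic example of what validating a request body against a JSON schema looks like, using the standalone jsonschema library. The schema and body below are invented for illustration and are not Nova's real API schemas.

    # Generic request-body validation with the jsonschema library.
    import jsonschema

    server_create_schema = {
        "type": "object",
        "properties": {
            "server": {
                "type": "object",
                "properties": {
                    "name": {"type": "string", "minLength": 1},
                    "flavorRef": {"type": "string"},
                },
                "required": ["name", "flavorRef"],
            },
        },
        "required": ["server"],
    }

    body = {"server": {"name": "test-instance", "flavorRef": "1"}}
    jsonschema.validate(body, server_create_schema)  # raises ValidationError on bad input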


Other
  • Refactor the internal nova API to make the nova-network and neutron implementations more consistent. launchpad specification


General features

Instance features

Networking

Scheduling
  • Extensible Resource Tracking. The set of resources tracked by nova is hard coded; this change makes it extensible, which will allow plug-ins to track new types of resources for scheduling (a toy plug-in sketch follows this list). launchpad specification
  • Allow a host to be evacuated, but with the scheduler selecting destination hosts for the instances moved. launchpad specification
  • Add support for host aggregates to scheduler filters. launchpad: disk; instances; and IO ops specification
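
To give a feel for what "extensible resource tracking" means in the item above, here is a toy plug-in shape. This is an illustration of the idea only, not the actual Nova interface; all names are made up.

    # Illustration only: replace a hard-coded set of tracked resources with
    # plug-ins that each report usage for one resource type.
    import abc

    class ResourcePlugin(abc.ABC):
        @abc.abstractmethod
        def name(self):
            """The resource this plug-in tracks, e.g. 'memory_mb'."""

        @abc.abstractmethod
        def usage(self, host_stats):
            """Return current usage of the resource on the given host."""

    class MemoryPlugin(ResourcePlugin):
        def name(self):
            return "memory_mb"

        def usage(self, host_stats):
            return host_stats.get("memory_mb_used", 0)

    def report_usage(host_stats, plugins):
        # The tracker simply iterates whatever plug-ins are registered.
        return {p.name(): p.usage(host_stats) for p in plugins}

    print(report_usage({"memory_mb_used": 2048}, [MemoryPlugin()]))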


Other
  • i18n enablement for Nova: turn on the lazy translation support from Oslo i18n and update Nova to adhere to the restrictions this adds to translatable strings. launchpad specification
  • Offload periodic task sql query load to a slave sql server if one is configured. launchpad specification
  • Only update the status of a host in the sql database when the status changes, instead of every 60 seconds. launchpad specification
  • Include status information in API listings of hypervisor hosts. launchpad specification
  • Allow API callers to specify more than one status to filter by when listing services. launchpad specification
  • Add quota values to constrain the number and size of server groups a user can create. launchpad specification


Hypervisor driver specific

Hyper-V

Ironic

libvirt

vmware
  • Move the vmware driver to using the oslo vmware helper library. launchpad specification
  • Add support for network interface hot plugging to vmware. launchpad specification
  • Refactor the vmware driver's spawn functionality to be more maintainable. This work was internal, but is mentioned here because it significantly improves the supportability of the VMWare driver. launchpad specification


Tags for this post: openstack juno blueprints implemented

Comment

Syndicated 2014-09-30 05:05:00 (Updated 2014-09-30 21:08:59) from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

My candidacy for Kilo Compute PTL

This is mostly historical at this point, but I forgot to post it here when I emailed it a week or so ago. So, for future reference:

I'd like another term as Compute PTL, if you'll have me.

We live in interesting times. openstack has clearly gained a large
amount of mind share in the open cloud marketplace, with Nova being a
very commonly deployed component. Yet, we don't have a fantastic
container solution, which is our biggest feature gap at this point.
Worse -- we have a code base with a huge number of bugs filed against
it, an unreliable gate because of subtle bugs in our code and
interactions with other openstack code, and have a continued need to
add features to stay relevant. These are hard problems to solve.

Interestingly, I think the solution to these problems calls for a
social approach, much like I argued for in my Juno PTL candidacy
email. The problems we face aren't purely technical -- we need to work
out how to pay down our technical debt without blocking all new
features. We also need to ask for understanding and patience from
those feature authors as we try and improve the foundation they are
building on.

The specifications process we used in Juno helped with these problems,
but one of the things we've learned from the experiment is that we
don't require specifications for all changes. Let's take an approach
where trivial changes (no API changes, only one review to implement)
don't require a specification. There will of course sometimes be
variations on that rule if we discover something, but it means that
many micro-features will be unblocked.

In terms of technical debt, I don't personally believe that pulling
all hypervisor drivers out of Nova fixes the problems we face, it just
moves the technical debt to a different repository. However, we
clearly need to discuss the way forward at the summit, and come up
with some sort of plan. If we do something like this, then I am not
sure that the hypervisor driver interface is the right place to do
that work -- I'd rather see something closer to the hypervisor itself
so that the Nova business logic stays with Nova.

Kilo is also the release where we need to get the v2.1 API work done
now that we finally have a shared vision for how to progress. It took
us a long time to get to a good shared vision there, so we need to
ensure that we see that work through to the end.

We live in interesting times, but they're also exciting as well.


I have since been elected unopposed, so thanks for that!

Tags for this post: openstack kilo compute ptl
Related posts: Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Thoughts from the PTL; Havana Nova PTL elections; Expectations of core reviewers

Comment

Syndicated 2014-09-29 18:34:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

The Decline and Fall of IBM: End of an American Icon?




ISBN: 0990444422
LibraryThing
This book is quite readable, which surprised me given the relatively dry topic. Whilst obviously not everyone will agree with the author's thesis, it is clear that IBM hasn't been managed for long-term success in a long time, and that there are a lot of very unhappy employees. The book is an interesting perspective on a complicated problem.

Tags for this post: book robert_cringely ibm corporate decline
Related posts: Phones; Your first computer?; Advertising inside the firewall; Corporate networks; Loyalty; Dead IBM DeveloperWorks


Comment

Syndicated 2014-09-26 00:39:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

