I like conversations with people who challenge my views on things. If I’m right then I should be able to marshal my arguments well enough, and if I’m not then clearly I need to learn and change my ideas.
So this evening I had a long chat with Barb Goldworm and she made me think long and hard about a couple of things.
First, InfiniBand. I had kind of written this off in the same bucket as FCoE because, in my view, storage is moving towards TCP/IP, and the problems with IP-connected storage are probably easier to solve than the problems of switching to yet another new technology. But is that right? Taken to its logical conclusion, every drive would have an embedded NIC and would connect to a backbone, with a storage processor directing IO to the disks in RAID and other survivability configurations, or potentially servers directing IO direct to individual disks if advanced features are not required. Logically, this would probably work better with InfiniBand than with IP, and quite likely with better performance, scalability and reliability. I’m thinking here about the day when comms between the storage processor (already often an Intel-based server appliance, at least in the midrange) and the storage itself evolves away from fibre. Fibre has a lot going for it, but it is expensive and arcane; “everyone understands IP” in the way that “everyone understands Windows” (including the downsides, such as naive and half-assed designs). Could InfiniBand really be the Next Big Thing? Or is it, as I had previously supposed given its decade and more of gestation, a niche technology outpaced by the more agile and more cheaply scalable IP? I will have to look harder at this.
The Microsoft analogy brings me to the second thing. I said already that VMware might become Novell 2.0; I am more convinced of this every time I hear that Paul Maritz is opposed to embracing other hypervisors, and doubly so every time I have the conversation with anyone outside VMware, since everyone I’ve spoken to outside VMware basically agrees with me on this. What’s the implication, though, for my favourite hypervisor? Logically, if we can choose to use KVM, Xen, ESXi or Hyper-V, won’t the Microsoft shops all vote for Hyper-V, or at least do as I would like to do with my infrastructure and split the load between hypervisors? In my case I want RHEV / KVM on some hosts for operational (read: licensing) benefits, but given the pervasive nature of Windows and the fact that I already buy Datacenter Edition licenses for my ESX hosts to cover the guests, surely if vCenter and its attendant (and usually very good) ecosystem of plugins and extensions also covered Hyper-V, why would I use ESXi instead? I think ESXi is the best hypervisor today, but I have not benchmarked 4.1 against R2 SP1 – I am told that the improved memory management in SP1 now places it on an equal footing. Maybe it does. And everybody understands Windows, right? (No, very wrong, actually; ask any Windows techie to install R2 Server Core and manage it, and see if they really understand Windows, but you get the idea.)
So, some food for thought there and time to shift my opinion on some things, perhaps. Just when I thought I understood it, too. Oh well.