Distributed Caching Showdown – Memcached vs Velocity

In the red corner is Memcached (http://www.danga.com/memcached/) with the BeITMemcached .NET library (http://code.google.com/p/beitmemcached/), weighing in at £0 and hailing all the way from geeky Unix-land.

In the blue corner is Velocity (http://msdn.microsoft.com/en-us/data/cc655792.aspx), also weighing in at £0 and hailing from Redmond.

Distributed caching is a simple idea: you have one or more machines that you use as a shared memory store, normally holding key/value pairs. It’s not really complicated, so how do these two differ?
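To make the idea concrete, here's a minimal sketch in Python (not how either product is implemented, just the concept): the client hashes each key to pick one of N servers, so the pool of spare memory across machines behaves like one big cache. Plain in-process dicts stand in for the real server nodes.

```python
import hashlib

class TinyDistributedCache:
    """Toy illustration of client-side key partitioning across servers."""

    def __init__(self, n_servers=3):
        # Each dict stands in for one cache server's memory.
        self.servers = [dict() for _ in range(n_servers)]

    def _pick(self, key):
        # Stable hash of the key -> server index. Real clients do something
        # similar (often with consistent hashing so adding a server
        # doesn't remap every key).
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.servers[h % len(self.servers)]

    def put(self, key, value):
        self._pick(key)[key] = value

    def get(self, key):
        return self._pick(key).get(key)

cache = TinyDistributedCache()
cache.put("Author", "Brian")
print(cache.get("Author"))  # -> Brian
```

The important property is that every client agrees on which server owns which key, so no coordination between servers is needed.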


Installation

After two hours of messing around with Ubuntu trying to get memcached to install, I gave up and plumped for the seriously simple MemCacheD Manager (http://allegiance.chi-town.com/MemCacheDManager.aspx). You just tell it which Windows servers you want to use and it remotely installs the service for you. Can’t be any easier.

Velocity was about as easy. Just install PowerShell v1.0 on the machine first, then run the Velocity installer and you’re pretty much done. You need to run a few scripts (included in the help file) to create a cache, but that takes under two minutes.


Features

No question here, Velocity has it licked. Memcached offers you, err, ‘Put’ and ‘Get’ and not much else. Velocity gives you such lovelies as:

  • Cache Invalidation (things in SQL changing can expire the cache)
  • Cache Groups (so you can specify different policies for different types of data)
  • High Availability (you can use three or more servers so your data survives a node going down)
  • Local Cache (for even more performance for data that can be stale)
  • Ability to use it to store Session data
  • A 64-bit version, so no real limit on memory
  • …and a whole bundle more.
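The local-cache feature deserves a quick illustration, because it explains the benchmark caveat later on. The sketch below is my own Python stand-in (the class name and TTL parameter are hypothetical, not Velocity's API): a small per-process cache sits in front of the distributed store, so repeat reads skip the network entirely, at the cost of possibly serving stale data until the local copy expires.

```python
import time

class LocalFrontedCache:
    """Hypothetical sketch of a 'local cache' in front of a distributed
    store: local hits are fast but may be stale until local_ttl expires."""

    def __init__(self, backend, local_ttl=1.0):
        self.backend = backend       # e.g. the distributed cache client
        self.local = {}              # key -> (value, expiry time)
        self.local_ttl = local_ttl

    def get(self, key):
        hit = self.local.get(key)
        if hit and hit[1] > time.monotonic():
            return hit[0]            # local hit: no network round-trip
        value = self.backend.get(key)  # miss/expired: go to the real cache
        self.local[key] = (value, time.monotonic() + self.local_ttl)
        return value

backend = {"Author": "Brian"}        # stand-in for the distributed cache
lc = LocalFrontedCache(backend, local_ttl=60)
lc.get("Author")                     # fetched from backend, cached locally
backend["Author"] = "Dave"           # value changes behind our back
print(lc.get("Author"))              # -> Brian (stale local copy)
```

That staleness window is exactly why data served this way needs to be the kind that "can be stale".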


Performance

After installing Memcached and Velocity on the same pair of servers, I wrote a small app to compare their performance. It simply writes and reads 1,000 small strings, 1,000 largish XML strings and 1,000 integers to and from each cache. The results are as follows:

[Benchmark chart: Velocity (with local-cache turned on)]

Clearly memcached is faster than Velocity (unless you count the local-cache option, which is cheating!). Velocity seems less fussed about the long XML strings than memcached (reading them back is 5x slower than small strings on memcached, but only 2x slower on Velocity), though that could be down to the client library.
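For anyone who wants to reproduce the test, the harness boiled down to something like this (a Python sketch, not my original app; the plain dict stands in for whichever real client you wire up):

```python
import time

def bench(cache, items, label):
    """Write then read every item, timing each phase separately."""
    t0 = time.perf_counter()
    for k, v in items:
        cache[k] = v                 # PUT phase
    t_write = time.perf_counter() - t0

    t0 = time.perf_counter()
    for k, _ in items:
        cache[k]                     # GET phase
    t_read = time.perf_counter() - t0
    return label, t_write, t_read

# The three workloads from the article: small strings, largish XML, integers.
small = [("s%d" % i, "hello") for i in range(1000)]
large = [("x%d" % i, "<doc>" + "x" * 10_000 + "</doc>") for i in range(1000)]
ints  = [("i%d" % i, i) for i in range(1000)]

cache = {}  # swap in a real memcached/Velocity client wrapper here
for items, label in [(small, "small strings"),
                     (large, "XML strings"),
                     (ints, "integers")]:
    print(bench(cache, items, label))
```

With an in-process dict the numbers are meaningless, of course; the point is that the same loop run against each real client gives a like-for-like comparison.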

Working With Them

Both support a PUT/GET model that’s pretty identical:


Velocity:

Dim CacheFactory1 As DataCacheFactory = New DataCacheFactory()
Dim myCache1 As DataCache = CacheFactory1.GetCache("test")

myCache1.Put("Author", "Brian")
Dim name As String = myCache1.Get("Author")


Memcached (via BeITMemcached):

Dim objcache As BeIT.MemCached.MemcachedClient = BeIT.MemCached.MemcachedClient.GetInstance("production")

objcache.Set("Author", "Brian")
Dim name As String = objcache.Get("Author")

In use there’s very little in it; however, there was something about Velocity I just couldn’t put my finger on. Pulling out the network lead confirmed my hunch:

DOH! Something designed to give us scalability, resilience and so on fails spectacularly if it can’t find the hosts. The whole idea behind these things is that you can use spare memory on spare machines: machines that may go down once in a while (or reboot for updates, for example). Memcached fails much more gracefully: after a short delay it just returns ‘nothing’, which is what you’d expect.

I’m sure Velocity could be coded around, but for me, for now, using ‘spare’ machines, Memcached seems to be the way to go. I’ll code it behind a ‘layer’ so I can switch to Velocity (or something else) if we ever need something bigger than memcached.
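That ‘layer’ idea is worth sketching, because it also solves the unplugged-network-lead problem. Here's one way it could look (Python stand-ins again; the class and client names are mine, not either product's API): wrap whichever client you use so that a dead cache cluster degrades to a cache miss instead of an exception.

```python
class SafeCache:
    """Sketch of a cache 'layer': callers never see backend failures,
    and the backend client can be swapped without touching callers."""

    def __init__(self, client):
        self.client = client  # memcached wrapper, Velocity wrapper, etc.

    def get(self, key):
        try:
            return self.client.get(key)
        except Exception:
            return None       # unreachable cache behaves like a miss

    def put(self, key, value):
        try:
            self.client.put(key, value)
        except Exception:
            pass              # caching is best-effort; never break the app

class DeadClient:
    """Simulates a client whose hosts have vanished off the network."""
    def get(self, key):
        raise ConnectionError("cache host unreachable")
    def put(self, key, value):
        raise ConnectionError("cache host unreachable")

cache = SafeCache(DeadClient())
print(cache.get("Author"))  # -> None, not an unhandled exception
```

The application then treats every `None` as "go fetch it from the database", which is what a cache miss means anyway, so losing a cache box (or the whole cluster) just costs speed, not uptime.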

