Category: Infrastructure

Expanding glusterfs volumes [1112]

Once you have set up a glusterfs volume, you might want to expand the volume to add storage. This is an astoundingly easy task.

The first thing that you’ll want to do is to add in bricks. Bricks are similar to physical volumes, a la LVM. The thing to bear in mind is that, depending on what type of volume you have (replicated / striped), you will need to add a certain number of bricks at a time.

Once you have initialised the nodes, you add a set of bricks with the following command, which adds two more bricks to a volume that keeps two replicas.

$ gluster volume add-brick testvol cserver3:/gdata cserver4:/gdata
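
If cserver3 and cserver4 are not yet part of the trusted storage pool, they need to be probed from one of the existing nodes first. Roughly:

$ gluster peer probe cserver3
$ gluster peer probe cserver4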

Once you have done this, you will need to rebalance the cluster, which involves redistributing the files across all the bricks. There are two steps to this process: the “fixing” of the layout and the rebalancing of the data itself. You can perform both tasks together.
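
As a rough sketch (assuming the GlusterFS 3.2 syntax and the same testvol volume as above), fixing only the layout, running both steps together, and checking progress look like this:

$ gluster volume rebalance testvol fix-layout start   # layout only
$ gluster volume rebalance testvol start              # layout fix plus data migration
$ gluster volume rebalance testvol status             # check progress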


My Thoughts on OCFS2 / Understanding OCFS2 [1110]

As mentioned earlier, we have been considering networked filesystems to replace NFS in a number of complex environments. OCFS2 was one of the first candidates.

In fact, we also considered GFS2 but looking around on the net, there seemed to be a general consensus recommending ocfs2 over gfs2.

Ubuntu makes it pretty easy to install and manage ocfs2 clusters. You just need to install ocfs2-tools and ocfs2console. You can then use the console to manage the cluster.
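
On Ubuntu that should be no more than the following (assuming the package names haven’t changed since I last looked):

$ sudo apt-get install ocfs2-tools ocfs2console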

What I had totally missed in all of my research, due to a lack of in-depth knowledge of clustered filesystems, was that OCFS2 (and GFS2 for that matter) are shared-disk filesystems.

What does this mean?


Exporting X11 to Windows [1109]

Playing Skyrim for the last week, I sometimes missed Linux so terribly that I wanted a piece of it, and not just the command-line version. I wanted X Windows on my Windows 7.

There has been a solution for this for several years; the first time around, I installed Cygwin with X11, but there is a far simpler way to accomplish it.

Install XMing. I then used putty, which has the forward X11 option. Once logged in, running xeyes shows the window exported onto my Windows 7. Ah.. so much better.
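
If you would rather script it than click through the PuTTY GUI, plink (PuTTY’s command-line sibling) can do the same job. A rough sketch, assuming Xming is already running on its default display and with a made-up hostname:

plink -X user@linuxbox xeyes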

I actually used this to run terminator to connect to a number of servers. Over local LAN, the windows didn’t have any perceptible lag or delay. It was more or less like running it locally.

It is possible to set up shortcuts to run an application through putty and have it exported to your desktop. I haven’t played with this enough to comment though.

This of course only worked because I have another box running Linux. If that is not the case for you, you might want to try VirtualBox, but since the Linux kernel developers have described its kernel modules as tainted crap, you might want to consider VMware instead, which is an excellent product.

GlusterFS HOWTO [1108]

So, I am catching up a bit on the technical documentation. A week taken to play Skyrim, combined with various other bits and pieces, made this a little difficult.

On the bright side, there are a few new things that have been worked on so hopefully plenty of things to cover soon.

We manage a number of servers scattered all over the place, and all of them need to be backed up. We also have a number of desktops, all with mirrored disks, which also get backed up.

I like things to be nicely efficient, and it’s annoying when one server or desktop runs out of space while another two (or ten) have plenty. We grew to dislike NFS, particularly due to the single point of failure, and there were few other options.

We had tried glusterfs a few years ago (I think it was at version 1.3 or so); there were various issues, particularly around small files, and configuration was an absolute nightmare.

With high hopes that version 3.2 was exactly what we were looking for, we set up three basic machines for testing.
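
To give a flavour of where this is going, the initial setup is roughly along these lines. This is only a sketch, with made-up hostnames (cserver1 to cserver3) and brick paths, and a plain distributed volume rather than whatever replication you actually want:

$ gluster peer probe cserver2       # run from cserver1
$ gluster peer probe cserver3
$ gluster volume create testvol cserver1:/gdata cserver2:/gdata cserver3:/gdata
$ gluster volume start testvol
$ mount -t glusterfs cserver1:/testvol /mnt/gluster   # on a client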


Making Twitter Faster

From my perspective, Twitter has a really really interesting technical problem to solve. How to store and retrieve a large amount of data really really quickly.

I am making some assumptions based on how I see Twitter working. I have little information about how it is architected, apart from some posts that suggest it is running Ruby on Rails with MySQL.

Twitter is in the rare category where a very large amount of data is being added. There should be no updates (except to user information, and there should be relatively little of that). There is no need for transactionality. If I guess right, it should be a large volume of inserts and selects.

While a relational database is probably the only viable choice for the time being, I think that Twitter could scale and perform better if all the extra bits of a relational database system were removed.

I love challenges like this. Technical ones are easier 😉

If I didn’t have a lifetime job, I would prototype this in a bit more depth. Garry pointed me in the direction of Hadoop. Having had a quick look at it, I think it can take care of the infrastructure, clustering and massive horizontal scaling requirements.

Now for the data layer on top. How to store and retrieve the data. HBase is probably a good option but doing it manually should be fairly straightforward too.

From my limited understanding of twitter, there are two key pieces of functionality, the timelines and search.

The timelines can be solved by storing each tweet as a file within a directory structure. My tweets would go into

/w/o/r/d/s/o/n/s/a/n/d/

The filename would be -

For the public timeline, you just have a similar folder structure, but with the timestamp; for example, the timestamp 1236158897 would go into the following structure as a symlink:

/1/2/3/6/1/5/8/8/9/7/

For search, pick up each word in the tweet and pop the tweet, as a symlink, into a folder for that word. You could have a single folder per word or follow the per-character structure above.

/t/w/i/t/t/e/r/- OR

twitter/-
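
As a very rough sketch of the whole scheme (the /tweets, /timeline and /search base directories and the timestamp-as-filename convention are my own assumptions for illustration):

$ user=wordsonsand; ts=1236158897; tweet="hello world"
$ udir=$(echo "$user" | sed 's/./&\//g')    # -> w/o/r/d/s/o/n/s/a/n/d/
$ tdir=$(echo "$ts" | sed 's/./&\//g')      # -> 1/2/3/6/1/5/8/8/9/7/
$ mkdir -p "/tweets/$udir" && echo "$tweet" > "/tweets/$udir$ts"
$ mkdir -p "/timeline/$tdir" && ln -s "/tweets/$udir$ts" "/timeline/$tdir"
$ for w in $tweet; do mkdir -p "/search/$w"; ln -s "/tweets/$udir$ts" "/search/$w/"; done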

You would then have an application running on top, with a distributed cache and an API that makes access to the data easier than direct file access. Running on Linux, the kernel will take care of a large part of the caching and buffering automatically, as long as there is enough RAM on the box.

This can in theory be done without Hadoop in between, by splitting the directory structures across multiple servers, but that has complications of its own, especially around adding and removing boxes for scalability.

You are also likely to run into limits on the number of files and sub-directories, but those can be solved by ‘archiving’ – there are multiple options for that too…

Thinking about this problem brought me back to the good old days of working on the search mechanism within megabus.com. We needed the site to deal with a large number of searches on limited hardware when the project was still classified as a pilot.

With some hard work and experimentation, we were able to reduce the search time to a tenth of the original time.

I’ll admit that I don’t know the details or the intricacies of the requirements that twitter has. I have probably over-simplified the problem but it was still fun to think about. If you can think of problems with this – let me know; I wanna turn them into opportunities 😉
