using Linux as a front-end controller for a SAN?

SilverNodashi

Expert Member
Joined
Oct 12, 2007
Messages
3,337
Hi all,

I hope someone can shed some light on this for me. Has anyone tried, or had experience with, setting up a Linux server to manage a few NAS devices and thus make them all visible to the clients as one large SAN?

Basically, I'm thinking it would be a good idea to combine the current NASes we have into one large system (effectively a SAN?) and then let the clients all connect to one server (for authentication, LUN control, etc.), but when they need to access their drives/devices/LUNs, they get redirected to the specific server directly. I'm also thinking it could be a good way to save some IP addresses, i.e. instead of giving each NAS a public IP, they could all have private IPs and then everyone just connects to the public IP if needed. The servers which will access the NASes will be on the same physical LAN and on the same private IP subnet to make it easier.
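Roughly, the aggregation side of that could be sketched on a plain Linux head node with open-iscsi plus LVM (just a sketch - the IPs, device names and volume names below are made up, and note the head node then sits in the data path):

```shell
# Sketch: pool two private-IP NAS iSCSI LUNs behind one Linux box
# (IPs and names are hypothetical).

# Discover and log in to each NAS's iSCSI target
iscsiadm -m discovery -t sendtargets -p 10.0.0.11
iscsiadm -m discovery -t sendtargets -p 10.0.0.12
iscsiadm -m node -p 10.0.0.11 --login
iscsiadm -m node -p 10.0.0.12 --login

# Pool the resulting block devices with LVM
pvcreate /dev/sdb /dev/sdc
vgcreate san_pool /dev/sdb /dev/sdc

# Carve out a per-client LUN, which could then be re-exported
# (e.g. via an iSCSI target on this box)
lvcreate -L 100G -n client1_lun san_pool
```

The catch with this naive layout is that every client byte flows through the one front-end server, which is the bottleneck concern raised later in the thread.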

Does anyone know what I'm talking about?
 

MagicDude4Eva

Banned
Joined
Apr 2, 2008
Messages
6,479
Are you looking at making the storage accessible to different clients (i.e. customers), or when you say clients do you mean servers? We are looking at Gluster.com, which lets you make storage available at the POSIX layer in a clustered-filesystem fashion. If that is your requirement, Gluster would fulfil it, as each client/server will be able to mount the filesystem you assign. I heard today that the latest version of Ubuntu provides native Gluster support as well. So far our experience has been good and we will take the solution live soon.
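For reference, a minimal Gluster setup along those lines might look something like this (hostnames and brick paths are placeholders, and the exact CLI differs a bit between Gluster versions):

```shell
# Two storage nodes, each contributing one brick to a replicated volume
gluster peer probe storage2                        # run on storage1
gluster volume create vol0 replica 2 \
    storage1:/export/brick1 storage2:/export/brick1
gluster volume start vol0

# Any box with the GlusterFS FUSE client can then mount it:
mount -t glusterfs storage1:/vol0 /mnt/vol0
```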
 

SilverNodashi

Expert Member
Joined
Oct 12, 2007
Messages
3,337
The clients are generally virtual private servers, but I could also share this storage with dedicated servers for backup purposes.

I had a look at GlusterFS, but my understanding is that it can combine, say, 5 servers into a "network RAID" setup. What I don't know is this: do all the clients connect to the master server, or do they connect to the specific server where the data is? Imagine a NAS with say 10TB of storage which needs to be accessed by a few hundred VMs. One single server, even with 10Gb NICs, would be a single bottleneck, and I'm trying to steer away from that. A VM with 100GB of data should only need to access server6 (for example), where its data is actually stored, and not the management/master server.

Ideally, I want to set up a scalable SAN, with reliability in mind along RAID 6 / RAID 10 lines, but each client should only connect to the relevant server where its data is stored.
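As far as I understand the GlusterFS FUSE client, there is no master in the data path: the server you mount from only hands the client the volume layout, and the client then talks directly to whichever bricks hold its files. A distributed-replicated volume across six boxes could be sketched like this (hostnames and paths are hypothetical):

```shell
# Sketch: a 3x2 distributed-replicated volume. Files are spread over
# three replica pairs, and the FUSE client does I/O straight to the
# pair that holds each file - not through a central server.
gluster volume create vms replica 2 \
    server1:/export/b1 server2:/export/b1 \
    server3:/export/b1 server4:/export/b1 \
    server5:/export/b1 server6:/export/b1
gluster volume start vms

# server1 here is only the initial contact point for the mount:
mount -t glusterfs server1:/vms /mnt/vms
```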
 

koffiejunkie

Executive Member
Joined
Aug 23, 2004
Messages
9,588
I haven't tried this, but my experience working with complex setups has taught me one thing: the more complex it gets, the easier it breaks, and the harder it is to fix. And that is true of anything to do with clustering. You need to make sure you want it for the right reasons before going down the cluster route.

Either way, IMHO, just get a real SAN.
 

koffiejunkie

Executive Member
Joined
Aug 23, 2004
Messages
9,588
Sorry, I didn't mean to sound arsey - poor choice of words. I meant dedicated SAN kit instead of relying on a patched-together setup like the one you're describing. Have a look at http://southafrica.emc.com/products/category/storage.htm

I think we use the Clariion kit with PowerPath software. Two HBA cards in each server, each connected to two storage controllers (two controllers per node), so four independent paths to the data - it just doesn't fail. Performance is stellar too.
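On stock Linux, a similar multipath setup is usually done with dm-multipath rather than PowerPath; a rough sketch (the config values are illustrative, not a tested EMC-specific config):

```shell
# Sketch: let dm-multipath spread I/O over all available paths and
# fail back as soon as a path recovers.
cat >> /etc/multipath.conf <<'EOF'
defaults {
    path_grouping_policy multibus    # use all four paths at once
    failback             immediate
}
EOF
systemctl restart multipathd
multipath -ll    # show the paths behind each multipath device
```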
 

SilverNodashi

Expert Member
Joined
Oct 12, 2007
Messages
3,337
The thing is, I don't like vendor lock-in - been there, done that, got the fried drives ;)

We used EMC previously and it worked well, up to the point where, if it broke, it went down in flames. Then you have to rely on their expensive support staff (once the service contract expires) to try and fix kit which isn't in production anymore, and you're forced to buy a whole new system just to get the data back. Oh, and you can't just pop the drives into another machine if you urgently need the data off them - you need to get one of their compatible units.

So, I want to steer away from that. Gluster (and GlusterFS) seems to do what I want: it would allow me to use my own hardware, like SuperMicro/Dell servers, and it runs on Linux, which would make it so much easier to fix when there's a problem.
 

koffiejunkie

Executive Member
Joined
Aug 23, 2004
Messages
9,588
Surely you should have two of everything to start with if you're putting your data on it?

Anyway, if you go down the Gluster route, make sure you have experience fixing it before you put customers' data on it. I've had to work on broken Gluster before, and I've had to work on broken DRBD before - it's no fun and it's pretty easy to lose your data. There isn't a lot of good information on the internet about what to do and what not to do, either.
 

SilverNodashi

Expert Member
Joined
Oct 12, 2007
Messages
3,337
> Surely you should have two of everything to start with if you're putting your data on it?

Obviously, but I tend to let stuff run long after its EOL date. My oldest PC in "production" is a Pentium Pro 166MHz with a 4GB HDD and 48MB of SIMM RAM, and it still runs like a dream. I got it without a CPU heatsink about 10 years ago and had to cut down a normal heatsink that I bought from G.T. Electronics in Boksburg and fit a 120mm fan onto it (that's how big the Socket 8 CPUs are). The problem is, most vendors want, or rather force, you to buy new equipment every 3 to 5 years.

> Anyway, if you go down the Gluster route, make sure you have experience fixing it before you put customers' data on it. I've had to work on broken Gluster before, and I've had to work on broken DRBD before - it's no fun and it's pretty easy to lose your data. There isn't a lot of good information on the internet about what to do and what not to do, either.

Yeah, I'm going to do some thorough testing before I put anything into production. Now just to get some servers out of production that I can use in the lab for a while...
 

ponder

Honorary Master
Joined
Jan 22, 2005
Messages
92,823
If you know anybody at tsunaminet.co.za, maybe give them a buzz, as they are running Gluster for a bunch of storage-related stuff.
 

MagicDude4Eva

Banned
Joined
Apr 2, 2008
Messages
6,479
> So, I want to steer away from that. Gluster (and GlusterFS) seems to do what I want: it would allow me to use my own hardware, like SuperMicro/Dell servers, and it runs on Linux, which would make it so much easier to fix when there's a problem.

Not quite sure what version of GlusterFS this was, but the current version provides self-healing and striping across bricks. We just put 2 Gluster nodes into production, and aside from some performance tweaks we need to do, everything seems to work fine. DR testing went well, and taking down bricks at runtime works quite well too. There are quite a few installations in SA - I guess data corruption can happen anywhere, and RAID + backups are essential.
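The brick-down/self-heal workflow we tested looks roughly like this (volume and host names are placeholders, and the replace-brick syntax varies between Gluster versions):

```shell
# After a downed brick comes back, check and trigger self-heal
gluster volume heal vol0 info      # files still pending heal
gluster volume heal vol0           # kick off healing

# Swapping out a failed brick entirely (version-dependent syntax):
gluster volume replace-brick vol0 \
    storage1:/export/brick1 storage3:/export/brick1 commit force
```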
 