[Catalyst] FW: Clustering catalyst apps

Gert Burger gburger at mweb.co.za
Mon May 8 14:45:09 CEST 2006


Thanks for the reply, here are some of my comments on this:

Using round-robin DNS still means that if 50% of the servers are down,
50% of all queries will go to the broken machines, which will piss off
half your customers.
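The arithmetic behind that is easy to check: each A record gets an equal share of lookups, whether the box behind it is alive or not. A toy simulation in plain /bin/sh (the server states are made up):

```shell
#!/bin/sh
# Four servers in round-robin rotation, two of them down.
servers="up down up down"
total=8
i=0
failed=0
while [ $i -lt $total ]; do
    n=$(( i % 4 + 1 ))                        # round-robin: 1,2,3,4,1,2,...
    state=$(echo $servers | cut -d' ' -f$n)   # state of the chosen server
    [ "$state" = "down" ] && failed=$((failed + 1))
    i=$((i + 1))
done
echo "$failed of $total requests hit a dead server"
```

With half the pool dead, half the requests fail, no matter how healthy the survivors are.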

I have looked at the High Availability systems that have been written
for Linux, and they provide doubles (or more) of everything, from load
balancers to db servers. The issue I have with them is that they require
a great deal of money in hardware to get running.

In any case, back to my issue: how do websites like Slashdot and Amazon,
both of which use Perl, keep uptimes of close to 99.999%?

And is it possible to get to that level with lots of crappy hardware?

Cheers

PS. Excuse me for meddling with the semi-impossible.

On Mon, 2006-05-08 at 13:30 +0100, Peter Edwards wrote:
> (I've put some more linebreaks in this time)
> 
> Set up the DNS for your application to map to multiple IP addresses, one
> each for however many web server machines you need. Run your perl apps on
> those.
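A round-robin setup like that might look as follows in a BIND zone file (names and addresses are illustrative):

```
; www resolves to three web servers; resolvers rotate through the
; A records, spreading requests across all of them.
www     IN  A   192.0.2.10
www     IN  A   192.0.2.11
www     IN  A   192.0.2.12
```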
> 
> Have a single database server machine with RAID mirrored disks. Have your
> perl apps connect to that.
> 
> Regularly backup your database across the net to a disaster recovery (DR)
> machine at a different physical location. With mysql you can do a hotcopy
> then rsync the files across. Set up the DR server so it can also be a web
> server.
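As a concrete sketch, the hotcopy-plus-rsync cycle could be a cron job on the database host (the schedule, paths and hostname are illustrative, and it assumes passwordless ssh to the DR box):

```
# crontab on the db host: every 4 hours, snapshot mydb then ship it to DR
0 */4 * * * mysqlhotcopy --quiet mydb /var/backup/mydb && rsync -az --delete /var/backup/mydb/ dr.example.com:/var/backup/mydb/
```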
> 
> Failures:
> 
> Disk - switch to mirror until you can replace the disk
> 
> Database host - switch your web apps to the DR server for database access;
> have an application strategy on what to do with delayed transactions that
> happened since the last database synchronisation [1]
> 
> Network/Datacentre - point DNS to DR server and use its web server (poor
> performance, but at least limited access is available)
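The "switch to the DR server" step can be sketched as a probe loop: try the hosts in order and connect to the first one that answers. Hostnames are illustrative, and probe() here is a fake stand-in for a real reachability check:

```shell
#!/bin/sh
# Probe database hosts in order; use the first that answers.
# probe() is a stand-in; in real use it could run something like
# "mysqladmin --host=$1 ping".
probe() {
    case "$1" in
        db-main.example.com) return 1 ;;  # pretend the main db is down
        *)                   return 0 ;;  # DR host answers
    esac
}

chosen=""
for host in db-main.example.com db-dr.example.com; do
    if probe "$host"; then
        chosen=$host
        break
    fi
done
echo "connect to: $chosen"
```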
> 
> Assuming you've got your servers in a data centre with triple connections to
> the Internet backbone, this last scenario is very unlikely.
> 
> A lot depends on how many users, how critical up-time is, what the cost
> equation is between having an alternative site and hardware versus the
> opportunity cost of lost sales and damaged reputation. The above works well
> for 10-150 concurrent users. For more you could consider using the
> clustering and failover features that come with some databases.
> 
> [1] For example, if you manage to recover the transaction log from the main
> db server you can merge the records in later provided your app hasn't
> allocated overlapping unique ids to its record keys.
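One common way to keep the two servers' keys from colliding (assuming MySQL 5.0+ and auto-increment primary keys) is to give each server its own interleaved id range:

```
# my.cnf on the main server -- generates ids 1, 3, 5, ...
auto_increment_increment = 2
auto_increment_offset    = 1

# my.cnf on the DR server -- generates ids 2, 4, 6, ...
auto_increment_increment = 2
auto_increment_offset    = 2
```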
> 
> Regards, Peter
> 
> -----Original Message-----
> From: catalyst-bounces at lists.rawmode.org
> [mailto:catalyst-bounces at lists.rawmode.org] On Behalf Of Gert Burger
> Sent: 08 May 2006 12:41
> To: The elegant MVC web framework
> Subject: [Catalyst] Clustering catalyst apps
> 
> Hi
> 
> I was wondering a few days ago how one would create a cluster of
> catalyst webapps?
> 
> Some of my early thoughts involved just having multiple machines
> running Apache behind a load balancer.
> 
> But you then still have a single point of failure, at the load balancer.
> 
> 
> Another problem is that if you use some sort of database to store your
> sessions etc., then you have another point of failure.
> 
> Therefore, how can an average small company improve the reliability of
> their (Catalyst) webapps without breaking the budget?
> 
> Gert Burger
> 
> 
> _______________________________________________
> Catalyst mailing list
> Catalyst at lists.rawmode.org
> http://lists.rawmode.org/mailman/listinfo/catalyst
> 
> 
> 
