[Catalyst] FW: Clustering catalyst apps
Joe Landman
landman at scalableinformatics.com
Mon May 8 14:55:18 CEST 2006
Gert Burger wrote:
> Thanks for the reply, here are some of my comments on this:
>
> Using round-robin DNS still means that if 50% of the servers are down,
> 50% of all queries will go to the broken machines, which will piss off
> half your customers.
Hmmm... with DNS proxies like dnsmasq and friends, this should not be an
issue.
> I have looked at the High Availability systems that have been written
> for Linux and they provide duplicates (or more) of everything, from load
> balancers to DB servers. The issue I have with them is that they require
> a great deal of money in hardware to get running.
If you want highly available systems, this will cost you.
> In any case, back to my issue: how do websites like Slashdot and Amazon,
> all of which use Perl, keep uptimes of close to 99.999%?
Designs with no single points of failure. Whether they are highly
available may be open to interpretation, but if you are going to stand
up a resource where the cost of being down (economic or otherwise) or
the risk of unavailability is high, you are going to want to make sure
you have no single points of failure anywhere in your process.
>
> And is it possible to get to that level with lots of crappy hardware?
Heh. No.
Crappy hardware is as its name implies.
If you want highly reliable stuff, you are going to need to purchase
non-crappy hardware. This doesn't mean expensive hardware; just don't
buy the obvious crap. Lots of hardware out there is crappy, and dealing
with it is a nightmare. In many cases it would cost you less to throw it
away and start with non-crappy hardware.
You need to design with the thought that single or multiple failures
will not take down everything. Also, you need to design for active
monitoring, simple start/stop mechanisms, and the like.
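
As a rough illustration of the active-monitoring piece, a cron-driven
health check can be very simple; the host names and the /ping URL below
are just placeholders for whatever your app actually exposes:

    #!/usr/bin/perl
    # Minimal active-monitoring sketch: poll each app server over HTTP and
    # complain about the ones that do not answer. Host names and the /ping
    # URL are placeholders.
    use strict;
    use warnings;
    use LWP::UserAgent;

    my @servers = qw( web1.example.com web2.example.com );
    my $ua      = LWP::UserAgent->new( timeout => 5 );

    for my $host (@servers) {
        my $res = $ua->get("http://$host/ping");
        next if $res->is_success;
        warn "$host failed health check: ", $res->status_line, "\n";
        # here you would page someone, pull the host out of rotation,
        # or kick off a simple restart of the app on that host
    }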
A nice DB system is indicated; MySQL/PostgreSQL should be fine. We use
SQLite3 for some of our stuff and shuttle the DB around, as it is small
enough for us to do so.
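
As a rough sketch of that shuttling step (the paths and the standby host
below are just placeholders), it amounts to copying one file and pushing
it across:

    #!/usr/bin/perl
    # Sketch of shuttling a small SQLite database to a standby box.
    # The paths and the standby host name are placeholders.
    use strict;
    use warnings;
    use File::Copy qw(copy);

    my $db      = '/data/app.db';
    my $standby = 'standby.example.com:/data/';

    # Quiesce writers first (stop the app, or take a write lock) so the
    # snapshot is consistent; then copy the single file and ship it.
    copy( $db, "$db.snapshot" ) or die "copy of $db failed: $!";
    system( 'rsync', '-az', "$db.snapshot", $standby ) == 0
        or die "rsync to $standby failed (status $?)";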
Joe
>
> Cheers
>
> PS. Excuse me for meddling with the semi-impossible.
>
> On Mon, 2006-05-08 at 13:30 +0100, Peter Edwards wrote:
>> (I've put some more linebreaks in this time)
>>
>> Set up the DNS for your application to map to multiple IP addresses, one
>> for each of however many web server machines you need. Run your Perl apps
>> on those.
>
>> Have a single database server machine with RAID mirrored disks. Have your
>> Perl apps connect to that.
>>
>> Regularly back up your database across the net to a disaster recovery (DR)
>> machine at a different physical location. With MySQL you can do a hotcopy
>> and then rsync the files across. Set up the DR server so it can also be a
>> web server.
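
A minimal sketch of that hotcopy-and-rsync step (the database name, paths
and the DR host are placeholders, and mysqlhotcopy is assumed to find its
credentials in ~/.my.cnf):

    #!/usr/bin/perl
    # Sketch of the backup step: hotcopy the database, then rsync the copy
    # to the DR machine. Database name, paths and host are placeholders;
    # mysqlhotcopy picks up credentials from ~/.my.cnf here.
    use strict;
    use warnings;

    my $db      = 'myapp';
    my $dir     = '/var/backup/mysql';
    my $dr_host = 'dr.example.com';

    # mysqlhotcopy takes a read lock, copies the table files, then unlocks.
    system( 'mysqlhotcopy', $db, $dir ) == 0
        or die "mysqlhotcopy failed: $?";

    # Push the copied files across to the DR box.
    system( 'rsync', '-az', '--delete', "$dir/$db/", "$dr_host:$dir/$db/" ) == 0
        or die "rsync to $dr_host failed: $?";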
>
>> Failures:
>>
>> Disk - switch to mirror until you can replace the disk
>>
>> Database host - switch your web apps to the DR server for database access
>> (see the sketch below); have an application strategy on what to do with
>> delayed transactions that happened since the last database synchronisation [1]
>>
>> Network/Datacentre - point DNS to DR server and use its web server (poor
>> performance, but at least limited access is available)
>>
>> Assuming you've got your servers in a data centre with triple connections to
>> the Internet backbone, this last scenario is very unlikely.
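
A minimal sketch of the database-host failover mentioned above, trying the
primary first and falling back to the DR box (host names, database name and
credentials are placeholders):

    #!/usr/bin/perl
    # Sketch of "switch your web apps to the DR server": try the primary
    # database first and fall back to the DR copy if it is unreachable.
    # Host names, database name and credentials are placeholders.
    use strict;
    use warnings;
    use DBI;

    my @dsns = (
        'dbi:mysql:database=myapp;host=db1.example.com',   # primary
        'dbi:mysql:database=myapp;host=dr.example.com',    # disaster recovery
    );

    my $dbh;
    for my $dsn (@dsns) {
        $dbh = DBI->connect( $dsn, 'webuser', 'secret',
                             { RaiseError => 0, PrintError => 0 } );
        last if $dbh;
        warn "could not connect to $dsn: $DBI::errstr\n";
    }
    die "no database available\n" unless $dbh;
    $dbh->{RaiseError} = 1;    # from here on, treat DB errors as fatal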
>>
>> A lot depends on how many users, how critical up-time is, what the cost
>> equation is between having an alternative site and hardware versus the
>> opportunity cost of lost sales and damaged reputation. The above works well
>> for 10-150 concurrent users. For more you could consider using the
>> clustering and failover features that come with some databases.
>>
>> [1] For example, if you manage to recover the transaction log from the main
>> db server you can merge the records in later provided your app hasn't
>> allocated overlapping unique ids to its record keys.
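
As an illustration of one way to keep those ids from overlapping, each
server can draw from its own disjoint sequence, for example odd ids on the
main box and even ids on the DR box (MySQL can do this natively with
auto_increment_increment and auto_increment_offset):

    #!/usr/bin/perl
    # Illustration of non-overlapping key allocation: the main box hands out
    # 1, 3, 5, ... and the DR box hands out 2, 4, 6, ..., so records created
    # on either side can be merged later without key collisions.
    use strict;
    use warnings;

    sub make_allocator {
        my ( $offset, $stride ) = @_;
        my $next = $offset;
        return sub { my $id = $next; $next += $stride; return $id };
    }

    my $main_id = make_allocator( 1, 2 );   # on the main db server
    my $dr_id   = make_allocator( 2, 2 );   # on the DR server
    print $main_id->(), ' ', $main_id->(), ' ', $dr_id->(), "\n";  # "1 3 2"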
>>
>> Regards, Peter
>>
>> -----Original Message-----
>> From: catalyst-bounces at lists.rawmode.org
>> [mailto:catalyst-bounces at lists.rawmode.org] On Behalf Of Gert Burger
>> Sent: 08 May 2006 12:41
>> To: The elegant MVC web framework
>> Subject: [Catalyst] Clustering catalyst apps
>>
>> Hi
>>
>> I was wondering a few days ago how one would create a cluster of
>> Catalyst web apps.
>>
>> Some of my early thoughts included just having multiple machines
>> running Apache behind a load balancer.
>>
>> But you then still have a single point of failure, at the load balancer.
>>
>>
>> Another problem is that if you use some sort of database to store your
>> sessions etc., you then have another point of failure.
>>
>> Therefore, how can an average small company improve their Catalyst
>> web apps' reliability without breaking the budget?
>>
>> Gert Burger
>>
>>
>
>
> _______________________________________________
> Catalyst mailing list
> Catalyst at lists.rawmode.org
> http://lists.rawmode.org/mailman/listinfo/catalyst
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 734 786 8452
cell : +1 734 612 4615