
multi-server / cluster extension (request for comments)


chrisv

Question

Hello Froxies,

 

I'm trying to take another stab at this long-standing issue. I'll first define what "multi-server" means (in my opinion) and why one might want it, then discuss how it could be implemented. The purpose of this post is to reach a consensus, so I can build a patch and get it accepted - I need this functionality, so I'm going to build it either way.

 

Definition of multi-server / cluster setup

 

A multi-server setup is any setup in which two or more physical or virtual machines (henceforth called "nodes") are used together to provide hosting services (e.g. Web/FTP/Mail/...). Possible scenarios include:

  • using dedicated/separate nodes for mail or database
  • distributing customers onto multiple nodes
  • running froxlor itself on a dedicated node (so customer services won't interfere with it)
  • serving a domain from multiple nodes (high-availability / load distribution)

Possible use-cases

 

There are several reasons why one would want to split services across multiple nodes:

  1. Scaling
    Obviously, if you have too many customers/services for a single server, you need additional nodes. Installing discrete (unconnected) Froxlor instances on every server is sometimes an option, but hardly desirable. Or maybe you want to move a customer who needs more resources to a dedicated host without changing a lot of configs in a lot of places. A central Froxlor instance makes this possible.
     
  2. Robustness / Availability
    If you let customers run their own scripts, some scripts will misbehave. With the default Froxlor installation (one server holding the Froxlor DB, the customer DBs and everything else), a single misbehaving script or query can bring the whole server down by overloading it, including the panel, mail and everything else. The only advantage of this is that angry emails from customers won't bother you, since you won't be receiving them anyway.
    Joking aside, it is highly desirable to have separate nodes for certain services, especially mail and database.
     
  3. Flexibility
    Different nodes can have different configurations: for example, some nodes might run older PHP versions to support legacy services, whereas others run the latest version. Or some customer needs a special configuration, additional packages, whatever.

Example scenarios

  1. Separate database servers for customers and Froxlor
    This is actually already addressed in pull request #237.
     
  2. Froxlor & central services (mail, ftp) running on separate node
    Froxlor is running on the management node, whereas customer domains run on app node(s).
     
  3. Dedicated nodes for some customers
    This is actually not very different from #2, just with one customer per node.
     
  4. Load balancing with multiple app servers for one domain
    This means that two or more nodes have the same configuration, just with different IPs. We can either add two A records for these IPs in the DNS (see the zone-file sketch below), or we can put a balancer in front (configuring the balancer itself is out of scope for this document). Of course, in this case we must make sure that the customer directory is the same on all nodes. This can be achieved either by using a distributed file system, or by mounting the customer directory over NFS (see pull request #236).
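
For the DNS variant, a minimal round-robin zone-file sketch (record name and IPs are hypothetical):

; clients resolve www to either address
www  IN  A  192.0.2.10
www  IN  A  192.0.2.11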

Implementation

 

The main idea of the proposed implementation is that we split Froxlor into two parts:

  1. The panel itself (which runs on the central management node)
  2. An agent which runs on the application nodes and creates the necessary configuration

Thankfully, this split already exists in current Froxlor: we have the panel (running as CGI under the Froxlor user) and the cron job, which runs as root. Furthermore, we need the same users and groups on every node, which can be achieved by configuring libnss-mysql to connect to the central Froxlor database [1].
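
As an illustration, assuming libnss-mysql is installed on each node (the MySQL connection details go into libnss-mysql's own configuration file), the NSS side of this amounts to a few lines in /etc/nsswitch.conf:

passwd: files mysql
group:  files mysql
shadow: files mysql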

 

The only thing we need to add is some way to tell the local agents/cronjobs what to do, and (equally important) what not to do. For this, I would add the following (a rough schema sketch follows after the list):

  1. A new table panel_nodes which contains a list of the nodes we have in our cluster.
     
  2. A new table panel_node_tasks which contains the tasks (e.g. webserver config, mailserver config, traffic, ...) which should be executed on a given node
     
  3. A new foreign key node in panel_ipsandports - with this, the webserver task can find out whether to generate config for a given domain (config is only generated for domains which have an ipandport matching the node)
     
  4. A command-line switch --for-host which tells froxlor_master_cronjob.php which host it is running on.
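
To make points 1-3 concrete, here is a rough MySQL sketch of what I have in mind (column names are guesses, not a finished design):

CREATE TABLE panel_nodes (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  hostname VARCHAR(255) NOT NULL, -- matched against --for-host
  PRIMARY KEY (id)
);

CREATE TABLE panel_node_tasks (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  node_id INT UNSIGNED NOT NULL, -- references panel_nodes.id
  task VARCHAR(50) NOT NULL, -- e.g. 'webserver', 'mailserver', 'traffic'
  PRIMARY KEY (id)
);

-- point 3: tie an IP/port to the node which actually binds it
ALTER TABLE panel_ipsandports ADD COLUMN node_id INT UNSIGNED NULL;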

Possible improvements (at a later stage)

  • At the moment, each node would require a live connection to the central Froxlor database. This could be solved by having an API which the agents call, but that does not help much as long as we still need the direct connection for libnss-mysql anyway.
  • Use eventing instead of polling (don't run the agent via cron, but trigger it when necessary): triggering could be done via ssh (see the sketch below), or by making the agent listen on a network socket (there was a solution for this once in Froxlor, but it got removed)
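
For the ssh variant, the trigger from the management node could be as simple as this (host name and install path are assumptions):

ssh root@app1.example.com 'php /var/www/froxlor/scripts/froxlor_master_cronjob.php --tasks --for-host=app1.example.com'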

Notes

  1. We can get rid of this connection if the libnss-extrausers patch is integrated in Froxlor, but that's a different topic

Any comments are welcome.

 

 


8 answers to this question


Chrisv,

 

If you want, we can talk about this. I have been running a 6-tier HA load-balanced cluster stack hosting Magento installs with Froxlor for about 3 years now. There are a few things I have found out over the years regarding this issue, and they may be of use to you.

 

Chuck


Hi,

 

I just started using Froxlor last week. I am running it with a small cluster, and it is working perfectly.

The following may give the developers an idea of how I set it up. Below is how I did it; it is not to be used as instructions, nor am I responsible if you break your configuration.

 

My current configuration uses Debian (or derivatives) and consists of: 1 x haproxy (load balancing), 4 x dedicated Apache web servers, 1 x dedicated MySQL server, 1 x NAS (storing all files), and 1 x Froxlor master (now running all remaining services).

 

The only thing is that the master has no direct control over the slaves yet, so I created a cron job on each slave to reload the Apache configuration automatically.

A good way to do it might be to add some kind of API callbacks to the slaves (a froxlor-plugin folder served by Apache, accepting connections only from the local network) to accept commands from the master; a sketch follows below.

I currently do this to sync email accounts from Froxlor to my main external email delivery server (Postfix).
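
Such a callback could look roughly like this (a hypothetical, heavily simplified sketch; the web server user would need sudo rights for exactly this one command):

<?php
// slave-side callback: reload Apache when the master asks for it;
// only accept requests from the internal network
$allowed = '10.0.0.'; // assumed internal subnet prefix
if (strpos($_SERVER['REMOTE_ADDR'], $allowed) !== 0) {
    http_response_code(403);
    exit('forbidden');
}
if (isset($_GET['cmd']) && $_GET['cmd'] === 'reload') {
    exec('sudo /usr/sbin/service apache2 reload', $output, $rc);
    http_response_code($rc === 0 ? 200 : 500);
}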

 

The main thing is to have the Froxlor master (preferably the machine with the largest storage) share its folders with the slaves. In my case a share was created on the NAS and then mounted on the master and the slaves.

Change the paths in Froxlor to point into the share and create the corresponding folders there (or copy/mv the current ones over):

Ex.: /shared/clients/, /shared/clients-ssl/, /shared/clients-logs/, /shared/clients-email, /shared/clients-deactivate, etc.

Also create /shared/apache/sites so that Froxlor can write the vhost files there.

Remember to set all of these paths in the Froxlor settings; an example mount for the share follows below.
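
For reference, mounting the NAS share on each node could look like this in /etc/fstab (host name and export path are assumptions):

nas.example.lan:/export/shared  /shared  nfs  defaults  0  0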

 

Now, one modification is needed in cron_tasks.inc.http.10.apache.php to change ipaddress:port to wildcard (*:port) in the generated vhosts.

I changed the lines below in the section public function createIpPort():

 

FROM:
$ipport = '[' . $row_ipsandports['ip'] . ']:' . $row_ipsandports['port'];
 
TO:
// $ipport = '[' . $row_ipsandports['ip'] . ']:' . $row_ipsandports['port'];
$ipport = '[*]:' . $row_ipsandports['port'];
 
FROM:
$ipport = $row_ipsandports['ip'] . ':' . $row_ipsandports['port'];

 

TO:
// $ipport = $row_ipsandports['ip'] . ':' . $row_ipsandports['port'];
$ipport = '*:' . $row_ipsandports['port'];

 

Modify /etc/apache2/apache2.conf (master and slaves) to include this line at the bottom: IncludeOptional /shared/apache/sites/*.conf

Now all web servers should pick up the vhosts you create with Froxlor after they reload via the cron job created on each slave.

Make sure you install all required packages and modules on each slave, so they work the same.
 
Install Postfix on all web servers and configure them to relay email through the Froxlor master.
Froxlor and Roundcube webmail are currently only accessible on the master through subdomains, routed there by haproxy.
 
There may be some file access issues with the share, depending on what software you use (NFS, SAMBA, etc.); these are fixable with the correct user/group IDs.
 
I also made modifications to use OpenDKIM with the same selector name for all domains (company._domainkey) but a different key per domain. More info available if requested.
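
Such a setup can be expressed with per-domain entries in OpenDKIM's KeyTable and SigningTable, roughly like this (domain and key path are assumptions; the wildcard form requires the SigningTable to be declared as refile:):

# KeyTable
company._domainkey.example.com example.com:company:/etc/opendkim/keys/example.com/company.private

# SigningTable (refile:)
*@example.com company._domainkey.example.com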

Actually, after some consideration I think it can be done with fewer changes:

  • we don't need a table which says which task to run where, as froxlor_master_cronjob.php can already do that:
    scripts/froxlor_master_cronjob.php --tasks
    scripts/froxlor_master_cronjob.php --traffic
    ...
    
  • we therefore only need a mechanism which prevents generation of (webserver) config for IPs which are not bound to the host - and this is relatively easy

I created a proof-of-concept in this branch - so far Apache only. Any testers/comments welcome.
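
For illustration, the filtering inside the webserver task could look roughly like this (a sketch of the idea, not the actual PoC code):

// determine which IPs are bound to this host
$boundIps = explode(' ', trim(shell_exec('hostname --all-ip-addresses')));

// inside the loop over panel_ipsandports rows:
if (!in_array($row_ipsandports['ip'], $boundIps, true)) {
    continue; // this IP lives on another node, skip config generation
}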

Some specific questions:

  1. If that path was followed, would it need a setting to turn it on/off? I'd argue there is no reason why one would ever want to generate config for an IP which is not bound to the host (Apache wouldn't even start with that), but there may be special cases I'm not aware of.
  2. Can anyone confirm (or deny) whether hostname --all-ip-addresses works on their platform? It works on Ubuntu, Debian and CentOS; what about Gentoo?

I really am a fan of this idea. I would like to see Froxlor as a multi-server panel. Many of my customers would benefit from this, and so would I: cluster systems would be much easier to set up if you could configure Froxlor to only manage certain services on a node.

 

For the implementation, I'm not so sure whether three additional tables are enough to bring real multi-server support. More likely you need to map every reseller, customer, IP and domain to the node(s) it is set up on.


More likely you need to map every reseller, customer, IP and domain to the node(s) it is set up on.

 

I'm not sure why this would be necessary. Domains are linked to node(s) through ipandport, which also links customers to node(s). Resellers can already be limited to an IP, so they could be limited to a node, too (by assigning the appropriate IP).


After having read some of the cronjob code, it looks like this feature would require touching almost all of the code in /scripts/. Since this is a good opportunity for a general cleanup in this area, what do you think about using an object-relational mapper for database access? IMHO (but I'm probably biased, as a long-time Python/Django dev) this could help make the code smaller and more readable; also, with a good ORM, we become independent of the database (Froxlor on SQLite, finally!) and get things like migrations (DB structure upgrades) for free.

If used properly, this can also be helpful for a future API, as it is often possible to map objects to a REST API without much effort. So, what's the general opinion on this?


I'm not sure why this would be necessary. Domains are linked to node(s) through ipandport, which also links customers to node(s). Resellers can already be limited to an IP, so they could be limited to a node, too (by assigning the appropriate IP).

 

Let's say I want to use this for a load-balancing cluster with LVS. The VIP has to be configured on every node, which means it is no longer linked to a specific node.


Archived

This topic is now archived and is closed to further replies.


