I'm taking another stab at this long-standing issue. I'll first define what "multi-server" means (in my opinion) and why one might want it, then discuss how it could be implemented. The purpose of this post is to reach a consensus so I can build a patch and get it accepted - I need this functionality, so I'm going to build it.
Definition of multi-server / cluster setup
A multi-server setup is any setup in which two or more physical or virtual machines (henceforth called "nodes") are used together to provide the hosting services (e.g. web/FTP/mail/...). Possible scenarios include:
using dedicated/separate nodes for mail or database
distributing customers onto multiple nodes
running froxlor itself on a dedicated node (so customer services won't interfere with it)
serving a domain from multiple nodes (high-availability / load distribution)
Possible use-cases
There are several reasons why one would want to split services across multiple nodes:
Scaling: Obviously, if you have too many customers/services for a single server, you need additional nodes. Installing discrete (unconnected) Froxlor instances on every server is (sometimes) an option, but hardly desirable. Or maybe you want to move a customer who needs more resources to a dedicated host without changing a lot of configs in a lot of places. Having a central Froxlor instance makes this possible.
Robustness / availability: If you let customers run their own scripts, some scripts will misbehave. With the default Froxlor installation (one server holding the Froxlor DB, the customer DBs and everything else), one misbehaving script or query can bring the whole server down by overloading it, including the panel, mail and everything else. The only advantage of that is that angry emails from customers won't bother you, as you won't be getting them anyway. In practice, it is highly desirable to have separate nodes for certain services, especially mail and database.
Flexibility: Different nodes can have different configurations. For example, some nodes might run older PHP versions to support legacy services, whereas others run the latest version. Or some customer needs a special configuration, additional packages, whatever.
Example scenarios
Separate database servers for customers and Froxlor: This is actually already addressed in pull request #237.
Froxlor & central services (mail, FTP) running on a separate node: Froxlor runs on the management node, whereas customer domains run on the app node(s).
Dedicated nodes for some customers: This is actually not very different from #2, just with one customer per node.
Load balancing with multiple app servers for one domain: This means that two or more nodes have the same configuration, just with different IPs. We can either add two A records for these IPs in the DNS, or put a balancer in front (configuring the balancer itself is out of the scope of this document). Of course, in this case we must make sure that the customer directory is the same on all nodes. This can be achieved either by using a distributed file system, or by mounting the customer directory over NFS (see pull request #236).
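For the DNS-based variant, the zone for the domain would simply carry one A record per app node. A sketch (domain name and IPs are placeholders from the documentation ranges):

```
; round-robin: resolvers hand out both addresses for www.customer.example
www.customer.example.  3600  IN  A  192.0.2.10
www.customer.example.  3600  IN  A  192.0.2.20
```

Most resolvers rotate the order of the returned records, which gives a crude but zero-infrastructure form of load distribution; it does not, however, detect a dead node the way a real balancer would.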
Implementation
The main idea of the proposed implementation is that we split Froxlor into two parts:
The panel itself (which runs on the central management node)
An agent which runs on the application nodes and creates the necessary configuration
Thankfully, this is already largely the case with current Froxlor, as we already have the panel (running as CGI under the Froxlor user) and the cron job, which runs as root. Furthermore, we need the same users and groups on every node, which can be achieved by configuring libnss-mysql to connect to the central Froxlor database.
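As a rough sketch, the relevant parts of /etc/libnss-mysql.cfg on each node might look like this (hostname, credentials, table and column names are all assumptions here - the exact queries have to match the Froxlor schema):

```
# connection to the central Froxlor database (values are placeholders)
host        froxlor-master.example.com
database    froxlor
username    nss
password    secret

# one of the lookup queries - maps panel accounts to system users
getpwnam    SELECT loginname, 'x', uid, gid, loginname, homedir, shell \
            FROM ftp_users WHERE loginname='%1$s' LIMIT 1
```

With this in place, every node resolves the same uid/gid space from the central database, so file ownership stays consistent across nodes (important for the shared-storage scenario above).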
The only thing we need to add is some way to tell the local agents/cronjobs what to do, and (equally important) what not to do. For this, I would add the following:
A new table panel_nodes which contains a list of the nodes we have in our cluster.
A new table panel_node_tasks which contains the tasks (e.g. webserver config, mailserver config, traffic, ...) which should be executed on a given node
A new foreign key node in panel_ipandports - with this, the webserver task can find out whether to generate config for a given domain (config is only generated for domains which have an ipandport matching the node)
A command-line switch --for-host which tells froxlor_master_cronjob.php which host it is running on.
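To make the proposal concrete, here is a minimal sketch of the node-filtering logic, in Python with SQLite standing in for the real MySQL database. The schema is a hypothetical simplification (panel_domains and its columns are assumptions); the point is only how the new node foreign key lets the agent select the domains it is responsible for:

```python
import sqlite3

# In-memory stand-in for the central Froxlor database (hypothetical schema).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE panel_nodes (
    id INTEGER PRIMARY KEY,
    hostname TEXT UNIQUE NOT NULL       -- matched against --for-host
);
CREATE TABLE panel_ipandports (
    id INTEGER PRIMARY KEY,
    ip TEXT NOT NULL,
    port INTEGER NOT NULL,
    node INTEGER REFERENCES panel_nodes(id)  -- the proposed new foreign key
);
CREATE TABLE panel_domains (
    id INTEGER PRIMARY KEY,
    domain TEXT NOT NULL,
    ipandport INTEGER REFERENCES panel_ipandports(id)
);
""")
db.executemany("INSERT INTO panel_nodes VALUES (?, ?)",
               [(1, "app1.example.com"), (2, "app2.example.com")])
db.executemany("INSERT INTO panel_ipandports VALUES (?, ?, ?, ?)",
               [(1, "192.0.2.10", 80, 1), (2, "192.0.2.20", 80, 2)])
db.executemany("INSERT INTO panel_domains VALUES (?, ?, ?)",
               [(1, "customer-a.example", 1), (2, "customer-b.example", 2)])

def domains_for_host(hostname):
    """Return the domains the agent on `hostname` should generate config for."""
    rows = db.execute("""
        SELECT d.domain
        FROM panel_domains d
        JOIN panel_ipandports ip ON d.ipandport = ip.id
        JOIN panel_nodes n ON ip.node = n.id
        WHERE n.hostname = ?""", (hostname,)).fetchall()
    return [r[0] for r in rows]

print(domains_for_host("app1.example.com"))  # -> ['customer-a.example']
```

The agent invoked with --for-host app1.example.com would run this query and only touch the vhosts for its own domains; everything else on the node stays untouched.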
Possible improvements (at a later stage)
At the moment, each node requires a live connection to the central Froxlor database. This could be avoided by offering an API which agents call instead, but that does not help much as long as we need the direct connection for libnss-mysql anyway.
Use eventing instead of polling (don't run the agent via cron, but trigger it when necessary): triggering could be done via SSH, or by making the agent listen on a network socket (Froxlor once had such a solution, but it was removed).
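The socket-listening variant can be sketched in a few lines (Python standing in for what would be PHP in Froxlor; the port handling and the one-line protocol are pure assumptions for illustration):

```python
import socket
import socketserver
import threading

def rebuild_configs():
    """Placeholder for the actual agent work (regenerating configs)."""
    print("rebuilding configs")
    return "OK"

class TriggerHandler(socketserver.StreamRequestHandler):
    # Each incoming connection triggers one rebuild immediately,
    # instead of waiting for the next cron interval.
    def handle(self):
        result = rebuild_configs()
        self.wfile.write(result.encode() + b"\n")

# Bind to localhost on an arbitrary free port for this sketch; a real agent
# would listen on a fixed, firewalled port or a unix socket.
server = socketserver.TCPServer(("127.0.0.1", 0), TriggerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def trigger(address):
    """What the panel (or an ssh-invoked helper) would do to poke the agent."""
    with socket.create_connection(address) as conn:
        return conn.makefile().readline().strip()

print(trigger(server.server_address))  # -> OK
```

The panel would call trigger() whenever a task is queued for that node, so changes propagate within seconds rather than at the next cron run; cron could stay in place as a fallback for missed triggers.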
chrisv
Relevant forum posts
Notes
Any comments are welcome.