Froxlor Forum



About OliverRahner

  1. Totally understandable, got it! Thanks for your patience.
  2. My current quick-and-very-dirty workaround looks like this, in froxlor/scripts/jobs/cron_tasks.inc.http.30.nginx.php:

```php
if ($domain['phpenabled_customer'] == 1 && $domain['phpenabled_vhost'] == '1') {
    $webroot_text .= "\t" . 'index index.php index.html index.htm;' . "\n";
    if (!preg_match("/^##NO_TRY_FILES$/m", $domain['specialsettings'])) {
        $webroot_text .= "\t\t" . 'try_files $uri $uri/ @rewrites;' . "\n";
    } else {
        // NO_TRY_FILES statement found, don't put try_files into config
    }
} else {
    $webroot_text .= "\t" . 'index index.html index.htm;' . "\n";
}
```

  If I now put the statement "##NO_TRY_FILES" on a line by itself into the vhost special settings, no "try_files" will be generated in "location /".
  3. And the next issue on my way to nginx'iness. I want to run Seafile on one vhost. I added all the settings Seafile told me to in their manual (https://manual.seafile.com/deploy/deploy_with_nginx.html). Didn't work at first (got 404'ed after the initial redirect), but once I commented out either the "try_files" in "location /" or in "location @php" it started working. It seems that the =404 which is default in "location @php" is the issue. Even after reading through the source code's "mergeVhostCustom" magic I couldn't find a way to override the default try_files statement without patching my Froxlor installation... Any hints?
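  To make the conflict concrete, here is a minimal sketch of the two generated directives as I understand them (server name and socket path are placeholders, not taken from an actual Froxlor config):

```nginx
server {
    server_name seafile.example.com;  # placeholder vhost

    location / {
        # generated by Froxlor's nginx template
        try_files $uri $uri/ @rewrites;
    }

    location @php {
        # Froxlor's default: anything not present on disk gets a 404 here,
        # which breaks Seafile's purely virtual URLs after the redirect
        try_files $uri =404;
        fastcgi_pass unix:/run/php-fpm-seafile.sock;  # placeholder socket
        include fastcgi_params;
    }
}
```

  Commenting out either of the two try_files lines avoids the =404 short-circuit, which matches the behaviour described above.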
  4. The HTTP2 issue was something totally unrelated (namely that for some reason, I could only exec one php thread PER VHOST at the same time :-P), I mentioned it just for completeness. I just checked and at some point in the past, the official libnss config file changed which I didn't notice. That is why www-data was not a member of all the users' group. I updated the libnss config, and... tadaaa, everything works. To be honest, I ignored all hints I found online regarding misconfigured libnss, because my libnss seemed to work. I could do "id web2" etc. and get seemingly ok results. But is there any reason, why adding www-data to the users' groups is better than just accepting socket connections to php-fpm by www-data? Thanks for your help.
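  For anyone hitting the same symptom: the relevant piece is the NSS configuration that lets the system resolve Froxlor's customer users and groups. A sketch of what /etc/nsswitch.conf could look like with libnss-extrausers (the module name is an assumption here; use whichever NSS backend your Froxlor setup actually uses):

```
passwd:  files extrausers
group:   files extrausers
shadow:  files extrausers
```

  As noted above, "id web2" can return seemingly correct results even while the group lookups are incomplete, so a broken NSS setup is easy to miss.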
  5. Hi, because I had some issues with HTTP2 and PHP via fcgid under Apache, I tried to switch to php_fpm. While I was at it, I noticed a problem that I couldn't pinpoint and decided to try nginx. The issue stayed the same, basically these log entries:

  nginx:

```
connect() to unix:/var/lib/apache2/fastcgi/domainname.de-php-fpm.socket failed (13: Permission denied) while connecting to upstream, client: xx.xx.xx.xx, server: domainname.de, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://unix:/var/lib/apache2/fastcgi/domainname.de-php-fpm.socket:", host: "domainname.de", referrer: "https://domainname.de/"
```

  Apache:

```
(13)Permission denied: [client xx.xx.xx.xx:63318] FastCGI: failed to connect to server "/var/www/php-fpm/web2/domainname.de/ssl-fpm.external": connect() failed
```

  The way I understand this problem: by design, php-fpm sockets created by Froxlor have permissions which only allow the vhost user to connect. But neither Apache nor nginx is told anywhere under which identity to connect to the socket. The SuExecUserGroup line in the vhost config file for Apache, which does this for fcgid, vanished when switching to php-fpm. I currently solved the problem by changing the "listen.owner" line inside the php-fpm pools to "www-data". That should not lower security, because php-fpm itself takes care that the PHP process runs as the vhost user. Can someone tell me where I misunderstood the whole concept?
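  For completeness, the workaround sketched as a php-fpm pool fragment (pool name, user, and socket path reuse this thread's examples; your generated pool files will differ):

```ini
; pool for the vhost, e.g. /etc/php/*/fpm/pool.d/domainname.de.conf
[domainname.de]
; the PHP worker processes themselves still run as the customer user
user = web2
group = web2

listen = /var/www/php-fpm/web2/domainname.de/ssl-fpm.external
; let the web server (running as www-data) connect to the socket
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
```

  The design point is that listen.owner/listen.group only control who may connect to the socket, while user/group control whom the PHP code runs as, so opening the socket to www-data does not change the privilege of the PHP processes.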
  6. It does, I have just double-checked that. But even if it didn't work, it couldn't cause this problem. Of course there would be no new certificate then, but the authorization would no longer be "pending" but "failed", and would not count towards the rate limit. /EDIT Your remark did help me find the problem, though. class.lescript.php first checks whether the token is reachable. If it isn't, the script aborts on its own without cleanly finishing the process with Let's Encrypt. Finishing in this case means submitting the challenge request, even if the script already knows beforehand that it will fail. So in the end, a misconfiguration on the server was probably to blame for the whole thing. Because the authorizations stayed open, fixing the problem does not make things work again immediately; instead you first have to wait for the authorizations to expire.
  7. Unfortunately, that's not correct... urn:acme:error:rateLimited means you have hit one of Let's Encrypt's rate limits (see https://letsencrypt.org/docs/rate-limits/). Excerpt from the link: I have not yet looked at the Let's Encrypt implementation in Froxlor in detail. But somewhere, challenges seem to be requested and then never validated. And once those 300 have piled up, Let's Encrypt shuts the door.