If authentik on your VPS keeps bouncing you through login screens, throwing you into callback loops, or leaving protected apps stuck behind broken outpost paths, the problem is usually not authentik itself. The usual causes are wrong forwarded headers, a bad external URL, an outpost path that is not publicly reachable, or a stale outpost that no longer matches the core instance. The production-safe pattern is simpler than most people make it: terminate TLS cleanly, forward the right headers, keep /outpost.goauthentik.io reachable without auth, use a sane external URL, and stop treating outposts like an afterthought.

The short answer

On a single VPS, the most stable authentik layout is usually this:

  • NGINX on ports 80 and 443
  • authentik core behind it on localhost or a private Docker network
  • the embedded outpost unless you have a real reason to separate it
  • a clean public FQDN for authentik, such as auth.example.com
  • app domains that explicitly proxy /outpost.goauthentik.io without protecting that path

If you do that, most lockout problems disappear. If you do not, the failure modes are repetitive and boring. That is good news, because boring failures are fixable.

Understand the moving parts before you debug the wrong one

Self-hosters get lost because they collapse three different components into one mental box.

  • The authentik server is the core application, admin UI, flows, policies, and API.
  • The outpost is the piece that handles proxying or forward-auth integration for protected apps.
  • The reverse proxy in front, often NGINX or Traefik, is what decides which headers, paths, and schemes authentik actually sees.

If one of those layers lies to the other two, you get loops. Not mysterious loops. Deterministic loops.

On a single VPS, the embedded outpost is often the least painful choice because it runs inside the main server container and uses the same ports as authentik itself. That removes one entire class of version drift and networking mistakes. If you deploy a standalone outpost manually, you are choosing more flexibility in exchange for more lifecycle work. That is fine, but stop pretending it is the same operational burden.

If you want a host where you actually control ports, TLS, container networking, and reverse-proxy behavior instead of negotiating around shared-hosting limitations, this is exactly the kind of workload that belongs on a KVM virtual server.

Mistake 1: your reverse proxy is telling authentik the wrong story

This is the most common cause of redirect loops. Authentik needs to know the original scheme, the original host, the client IP, and whether WebSocket upgrades are happening. If your reverse proxy drops or rewrites those details badly, authentik builds the wrong URLs, sets the wrong expectations, or breaks communication with the outpost.

The minimum NGINX block for the authentik domain should look like this:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name auth.example.com;

    ssl_certificate     /etc/letsencrypt/live/auth.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/auth.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

The critical lines are not decorative. If X-Forwarded-Proto is wrong, authentik may build the wrong callback scheme. If Host is wrong, security checks and outpost communication can go sideways. If WebSocket upgrade headers are missing, outpost communication becomes fragile. And if your proxy still speaks HTTP/1.0 to the upstream, which is NGINX's default unless you set proxy_http_version 1.1, WebSocket upgrades cannot work at all.

One operational detail that gets missed constantly: if your reverse proxy is not reaching authentik from a private IP range, configure trusted proxy CIDRs explicitly in the authentik server environment. Otherwise the forwarded client information is not treated the way you think it is.

environment:
  AUTHENTIK_LISTEN__TRUSTED_PROXY_CIDRS: "127.0.0.1/32,10.0.0.0/8,192.168.0.0/16,203.0.113.10/32"

Use your real proxy IPs or ranges. Do not cargo-cult example CIDRs into production.

Mistake 2: the external URL is wrong, incomplete, or learned from the wrong place

Authentik’s embedded outpost does not magically know the one true URL you intend users to log in with. On a fresh install, it often learns from the URL used to access it first. If that first touch was a raw IP, a temporary hostname, or a testing URL, the outpost can end up building redirects around the wrong identity of the service.

That is why this field matters so much:

authentik_host: https://auth.example.com/

Use a full URL. Not just a hostname. Not just an FQDN without scheme. A full URL.
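If you want to make that rule mechanical, the shape is easy to check before you paste the value into the outpost config. The helper below is a hypothetical convenience for illustration, not part of authentik's tooling; the only requirements it encodes are the ones stated above: https scheme, full URL, trailing slash.

```shell
#!/bin/sh
# Hypothetical sanity check for an authentik_host value.
# Requirements encoded: full URL, https:// scheme, trailing slash.
check_authentik_host() {
  url="$1"
  case "$url" in
    https://*/) echo "ok" ;;
    https://*)  echo "missing trailing slash: try ${url}/" ;;
    http://*)   echo "wrong scheme: use https, not http" ;;
    *)          echo "not a full URL: add the https:// scheme" ;;
  esac
}

check_authentik_host "https://auth.example.com/"
```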

This is also where people create pain for themselves with non-standard ports and subpaths. Subpaths have a long history of redirect and callback awkwardness in authentik’s proxy and forward-auth modes. Non-443 external ports can create ugly edge cases where one port works and another does not, especially if your reverse proxy or provider configuration assumes the canonical external host without fully accounting for the alternate port.

The blunt advice is this: if you are self-hosting authentik on one VPS, use clean subdomains. Use auth.example.com for authentik. Use app.example.com for the protected app. Stop trying to be clever with /auth, mixed ports, or app-in-subdirectory routing unless you genuinely need it and are willing to debug all the side effects.

Mistake 3: you protected the outpost path that must stay public

This is the most self-inflicted loop of the bunch. Everything under /outpost.goauthentik.io has to stay reachable without authentication, because that path is part of how authentication is initiated and completed. If you protect it with the same auth layer that depends on it, you create a perfect recursion machine.

Authentik’s own troubleshooting guidance gives you the simplest test that matters:

curl -vk https://app.example.com/outpost.goauthentik.io/ping

A healthy setup returns HTTP 204. If it does not, stop debugging flows and policies. Your request path is broken before the identity layer even gets a fair chance.
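That check is easy to script into a probe you can run after every config change. The classify_ping helper below is a hypothetical wrapper, not part of authentik; only the curl line and the 204 expectation come from the troubleshooting guidance above.

```shell
#!/bin/sh
# Hypothetical helper: interpret the HTTP status returned by the outpost
# ping endpoint. 204 is the only healthy answer.
classify_ping() {
  case "$1" in
    204)     echo "healthy" ;;
    401|403) echo "outpost path is protected by the auth layer it serves" ;;
    404)     echo "proxy is not routing /outpost.goauthentik.io to authentik" ;;
    *)       echo "unexpected status $1: check proxy_pass and upstream health" ;;
  esac
}

# Usage against a live host (network call, shown for context):
# status=$(curl -sk -o /dev/null -w '%{http_code}' \
#   https://app.example.com/outpost.goauthentik.io/ping)
# classify_ping "$status"
```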

For an NGINX-protected app using the embedded outpost, the critical shape looks like this:

# The protected application itself. Every request here is gated by the
# auth_request subcall to the embedded outpost.
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;   # relies on the map block shown earlier
    proxy_http_version 1.1;

    # Ask the outpost whether this request is authenticated.
    auth_request /outpost.goauthentik.io/auth/nginx;
    error_page 401 = @goauthentik_proxy_signin;

    # Pass authentik's session cookie back to the browser.
    auth_request_set $auth_cookie $upstream_http_set_cookie;
    add_header Set-Cookie $auth_cookie;

    # Forward the authenticated username to the app.
    auth_request_set $authentik_username $upstream_http_x_authentik_username;
    proxy_set_header X-authentik-username $authentik_username;
}

# The outpost path. This must stay reachable WITHOUT authentication.
location /outpost.goauthentik.io {
    proxy_pass http://127.0.0.1:9000/outpost.goauthentik.io;
    proxy_set_header Host $host;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    proxy_pass_request_body off;        # auth subrequests need headers, not bodies
    proxy_set_header Content-Length "";
}

# Where unauthenticated requests are sent to start the login flow.
location @goauthentik_proxy_signin {
    internal;
    return 302 /outpost.goauthentik.io/start?rd=$scheme://$http_host$request_uri;
}

The core rule is simple: your app path can be protected, but the outpost path cannot be treated as just another protected location.

Also, if you start seeing NGINX errors about headers being too large while protecting applications with authentik, increase proxy buffers instead of pretending the auth layer is randomly unstable.
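The fix is two directives in the relevant server or location block. The values below are common starting points, not tuned numbers; raise them further if the errors persist.

```nginx
# Starting values for large authentik headers and cookies under auth_request.
proxy_buffers 8 16k;
proxy_buffer_size 32k;
```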

Mistake 4: your outpost lifecycle is sloppy

Outposts are not decorative sidecars. They are version-sensitive components in the auth path.

Authentik-managed outposts on Docker or Kubernetes are upgraded automatically by the platform integration. Manually deployed outposts are not. If you run a standalone outpost in its own Compose project and forget to upgrade it when the core instance moves forward, you have created silent drift in the exact component that users depend on to log in.

Worse, the release notes explicitly warn that the authentik core instance and the outposts should be kept on the same version. Ignore that and you are not being flexible. You are building a delayed incident.

If you deploy a standalone outpost manually, pin its image tag and treat it as part of the upgrade plan, not as something the dashboard can worry about later.

services:
  authentik-outpost:
    image: ghcr.io/goauthentik/proxy:2026.2
    container_name: authentik-outpost
    restart: unless-stopped

Then check the Outposts page in the admin UI after every upgrade. If the health and version indicators are unhappy, believe them.
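You can also script the drift check instead of eyeballing the dashboard. The version_drift helper below is a hypothetical sketch; it assumes both images carry an explicit tag, which pinning gives you.

```shell
#!/bin/sh
# Hypothetical drift check: compare the tag portion of two image references.
version_drift() {
  core_tag="${1##*:}"      # strip everything up to the last colon
  outpost_tag="${2##*:}"
  if [ "$core_tag" = "$outpost_tag" ]; then
    echo "in sync: $core_tag"
  else
    echo "DRIFT: core=$core_tag outpost=$outpost_tag"
  fi
}

# Usage against live containers (shown for context):
# version_drift "$(docker inspect --format '{{.Config.Image}}' authentik-server)" \
#               "$(docker inspect --format '{{.Config.Image}}' authentik-outpost)"
```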

One more version-specific landmine is worth mentioning. From authentik 2025.12 onward, files are served from /files instead of /media. If you carried forward a custom reverse-proxy rule that still expects the old path, branding assets or file-backed elements can break in a way that looks unrelated to login. It is not the main cause of lockout loops, but it is exactly the kind of legacy proxy mistake that makes an upgrade feel cursed.

Mistake 5: you chose the wrong outpost mode for the job

Not every authentik deployment problem is a header problem. Some are topology mistakes.

If you are using forward auth, decide early whether you want single-application mode or domain-level mode. If you are securing one app cleanly on one host, single-application mode is usually easier to reason about. If you are applying the same auth layer across a whole domain of services, domain-level mode can reduce repetitive configuration. But once you mix modes casually, plus subpaths, plus odd external ports, you stop having a design and start having a puzzle.

The rule of thumb is simple:

  • Single app, simple domain, single protected service: keep it single-application.
  • Many services under one domain with the same access policy: consider domain-level forward auth.
  • Apps living under subdirectories instead of dedicated subdomains: expect more edge cases, more callback weirdness, and more time wasted.

For teams building identity and access controls as part of a wider self-hosting stack, this is also where choosing the right host matters. You need predictable networking, real root access, and no panel-imposed routing surprises. That is exactly why this kind of stack belongs on virtual servers with full control, not on a constrained shared environment.

The production-proof layout we would actually trust on one VPS

For a single-node authentik deployment on a Linux VPS, the safest boring design is usually:

  • Debian 12 or Ubuntu 24.04
  • NGINX on the host for TLS termination
  • authentik server and worker in Docker Compose
  • PostgreSQL on the same VPS, internal only
  • embedded outpost first, standalone outpost only when you have a topology reason
  • dedicated subdomains, not subpaths, for auth and protected apps

That layout is not exciting. Good. Identity systems should not be exciting. They should be predictable.
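The layout above can be sketched as a Compose file. Everything here is illustrative — service names, image tags, the Redis dependency, and volume paths are assumptions, so check the official authentik Compose file for the authoritative shape. The load-bearing detail is the ports line: authentik binds to 127.0.0.1 only, so nothing reaches it except host-local NGINX.

```yaml
# Illustrative sketch of the single-VPS layout, not a complete production file.
x-authentik-env: &authentik-env
  AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY:?required}
  AUTHENTIK_REDIS__HOST: redis
  AUTHENTIK_POSTGRESQL__HOST: postgresql
  AUTHENTIK_POSTGRESQL__USER: authentik
  AUTHENTIK_POSTGRESQL__NAME: authentik
  AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS:?required}

services:
  postgresql:
    image: docker.io/library/postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_USER: authentik
      POSTGRES_DB: authentik
      POSTGRES_PASSWORD: ${PG_PASS:?required}
    volumes:
      - database:/var/lib/postgresql/data
    # no ports: entry -- the database never leaves the Compose network

  redis:
    image: docker.io/library/redis:7
    restart: unless-stopped

  server:
    image: ghcr.io/goauthentik/server:2026.2   # pin the tag; keep outposts on the same version
    restart: unless-stopped
    command: server
    environment: *authentik-env
    ports:
      - "127.0.0.1:9000:9000"   # reachable only by host-local NGINX, never the internet
    depends_on:
      - postgresql
      - redis

  worker:
    image: ghcr.io/goauthentik/server:2026.2
    restart: unless-stopped
    command: worker
    environment: *authentik-env
    depends_on:
      - postgresql
      - redis

volumes:
  database:
```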

If you are building a wider self-hosted stack around this box, Managing Docker and Containers on a VPS covers the container side of the same discipline. If you are still designing the host from scratch, Self Host Website Guide 2026 is the right infrastructure-level companion. And if you are tempted to let container updates drift without version control, Blind Docker Auto-Updates Are Not Maintenance is relevant here too, because outpost version drift is the same operational sin in smaller clothes.

Lockout recovery checklist

  • Confirm the authentik domain reaches the server cleanly with the correct TLS certificate.
  • Confirm the app domain can return 204 on /outpost.goauthentik.io/ping.
  • Check whether the embedded outpost or standalone outpost is actually the one your proxy points to.
  • Check the outpost version against the core instance version.
  • Check the external URL and authentik_host value for scheme, hostname, and trailing slash sanity.
  • Check that your reverse proxy passes Host, X-Forwarded-Proto, and WebSocket upgrade headers.
  • If using NGINX auth_request, make sure the outpost path is not protected by the same auth layer.

The first commands worth running are boring and direct:

curl -vk https://auth.example.com/
curl -vk https://app.example.com/outpost.goauthentik.io/ping
docker compose ps
docker compose logs --tail 200 authentik-server authentik-worker
docker logs --tail 200 authentik-outpost

If you are debugging forward auth specifically, turn the outpost log level up high enough to see the headers being received. Authentik’s own troubleshooting docs call out trace logging for exactly this reason. It is often the fastest way to prove that the reverse proxy is forwarding the wrong scheme, wrong host, or wrong original URL.
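Raising the log level is an environment variable on the server (or standalone outpost) container, followed by a restart. Remember to turn it back down: trace logging is loud and can include sensitive request detail.

```yaml
environment:
  AUTHENTIK_LOG_LEVEL: trace   # revert to info once you have the answer
```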

The practical answer

Most authentik lockouts on VPS deployments come from a small set of repeatable mistakes: wrong forwarded headers, the wrong external URL, a blocked or protected outpost path, or outpost/core version drift. The stable fix is not more guessing inside flows. It is cleaner topology. Use a proper FQDN, keep the outpost path public, prefer the embedded outpost on a simple single-node VPS, and treat reverse-proxy configuration as part of the identity system, not as a separate afterthought.

If you want a VPS where you can control the reverse proxy, certificates, ports, and container network yourself, start with ServerSpan virtual servers. Authentik is not hard because identity is magic. It is hard because one bad assumption at the proxy layer can lock you out of the whole thing.

Source & Attribution

This article is based on original material from the serverspan.com blog. For the complete methodology and context, cite the original article. The canonical source is available at: Authentik on a VPS: the redirect loops, outposts, and reverse-proxy mistakes that lock you out.