This post is about using OAuth JWTs to secure multi-tenant services.

My application runs in Kubernetes, with an Ambassador ingress admitting external traffic through to the services. The services all check for a valid JWT on the request, verifying both that it is signed properly and that it carries the right permissions to execute the particular service function requested. The JWTs are generated by Keycloak, which runs as a service inside Kubernetes.

In a single-tenant situation this is simple enough. You'd have a realm in Keycloak that held the various secrets and credentials. You'd send a request to Keycloak specifying that realm, supplying the credentials in one of various ways (client secret, login etc.), and you'd get back a JWT you could attach to subsequent requests. Each service would validate each JWT it found against that realm (the service caches the keys so it doesn't have to hit Keycloak all the time). All good.
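
As a rough sketch, here is what requesting that JWT can look like with the password grant; the user name and password are hypothetical, and the client_id of login appears again below:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenDemo {
    public static void main(String[] args) throws Exception {
        // Form-encoded password grant against the realm's token endpoint.
        String form = "grant_type=password"
                + "&client_id=login"
                + "&username=someuser"        // hypothetical user
                + "&password=somepassword";   // hypothetical password
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://mytenant.com/auth/realms/bonanza"
                        + "/protocol/openid-connect/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        // The JSON response carries access_token: the JWT to attach as
        // "Authorization: Bearer <token>" on subsequent requests.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}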

But I have multiple tenants so it is a bit more complicated.

When I say tenants I mean separate legal entities. They should be blissfully unaware of each other's existence and they certainly must not be able to mess with each other's data. I also want them to be able to maintain their own lists of users allowed to log in, but not each other's lists, of course. That means each tenant needs a separate Keycloak realm. Keycloak is built to handle multiple tenants and that is what its realms are for. So that's the problem solved for Keycloak. Not so much for everything else, though.

The requests mostly originate from Angular applications (portals) running inside browsers. These applications don't actually know which tenant they belong to; all they have is a URL, and different tenants have different URLs. So when a portal requests a JWT it cannot specify the Keycloak realm; it can only specify the URL for the tenant. It looks something like:

https://mytenant.com/auth/realms/bonanza/protocol/openid-connect/auth?client_id=login...

where the realm is bonanza in my case and the client_id is login. I'll mention that client_id further down.

I configure my ingress to edit this based on the host with:

apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: keycloak-tenant05-backend
spec:
  prefix: /auth/
  service: keycloak-service:8080
  regex_rewrite:
    pattern: '/(auth\/realms)/([^/]*)/(.*)'
    substitution: '/\1/tenant05/\3'
  host: www.mytenant.com

This causes all requests for a JWT to have the realm switched from bonanza to tenant05 if the host domain name is www.mytenant.com, so the returned JWT will be specific to the tenant05 realm.
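
To make the rewrite concrete, here is the same pattern and substitution applied to a request path in plain Java; Ambassador's \1 and \3 backreferences become $1 and $3:

import java.util.regex.Pattern;

public class RewriteDemo {
    public static void main(String[] args) {
        // The same pattern the Mapping uses.
        Pattern pattern = Pattern.compile("/(auth/realms)/([^/]*)/(.*)");
        String path = "/auth/realms/bonanza/protocol/openid-connect/auth";
        String rewritten = pattern.matcher(path).replaceAll("/$1/tenant05/$3");
        System.out.println(rewritten);
        // prints /auth/realms/tenant05/protocol/openid-connect/auth
    }
}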

Naturally I don't do this manually; an automated process creates each new tenant, which means creating the realm, editing the ingress and a few other things.

But this still doesn't solve the whole problem. What happens when a service gets a request carrying this JWT? It wants to check the signature of the JWT, and for that it needs a key from the right realm. How does it know which realm?

The solution is a little more editing of the ingress. I add the tenant to the request headers like this:

apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: grasshopper-tenant05-backend
spec:
  prefix: /grasshopper/
  add_request_headers:
    X-TenantKey: "tenant05"
  service: grasshopper-service:8080
  host: www.mytenant.com

This is not too different from the earlier ingress edit. It is still specific to the host domain name, but instead of a complex URL rewrite it just adds a header to the request and passes it through. When the service gets the request, an interceptor looks for the header and uses it to pick the realm whose key checks the JWT signature. If all is well it passes the request on to the rest of the service. The usual Spring Security code manages the permissions.

The interceptor responsible for doing this is an extension of KeycloakSpringBootConfigResolver. Mine isn't in the public domain, but it is derived from this one and the technique works just fine.
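
For flavour, here is a minimal sketch of what such a resolver can look like with the (since deprecated) Keycloak Spring Boot adapter; the class name, client and auth server URL are illustrative, not my production code:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.keycloak.adapters.KeycloakDeployment;
import org.keycloak.adapters.KeycloakDeploymentBuilder;
import org.keycloak.adapters.spi.HttpFacade;
import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.keycloak.representations.adapters.config.AdapterConfig;

public class TenantAwareConfigResolver extends KeycloakSpringBootConfigResolver {

    // One KeycloakDeployment per tenant, cached so the realm keys are
    // fetched once per tenant rather than on every request.
    private final Map<String, KeycloakDeployment> cache = new ConcurrentHashMap<>();

    @Override
    public KeycloakDeployment resolve(HttpFacade.Request request) {
        String tenant = request.getHeader("X-TenantKey");
        if (tenant == null) {
            // No header: fall back to the realm configured in application.yml.
            return super.resolve(request);
        }
        return cache.computeIfAbsent(tenant, t -> {
            AdapterConfig config = new AdapterConfig();
            config.setRealm(t);                     // realm name == tenant key
            config.setResource("login");            // the client_id from above
            config.setAuthServerUrl("http://keycloak-service:8080/auth");
            return KeycloakDeploymentBuilder.build(config);
        });
    }
}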

Having got this working, the portal developers came back with a request to run all of it not in Kubernetes but in docker-compose, an environment they can more easily bring up and down on their local machines. I produced a mock mode for each of the services so they would not need the usual database and queueing infrastructure and could respond with mock data that the developers could edit easily. But what to do about the JWTs and the ingress?

Obviously I had to add Keycloak to the list of services that docker-compose brings up, which was simple enough. When not configured with a database, Keycloak uses an in-memory one, and it accepts an import file, so I could configure it with the settings I needed. Specifically, I needed this configuration to create a realm for the test tenant, a login client and a couple of users. I customise the JWT a little in Keycloak, adding some fields to it, and I needed to configure that as well.
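
The import file is an ordinary Keycloak realm export. A heavily trimmed sketch of its shape, with hypothetical user and redirect values (my real file also carries the protocol mappers that add the custom JWT fields):

{
  "realm": "tenant01",
  "enabled": true,
  "clients": [
    {
      "clientId": "login",
      "publicClient": true,
      "redirectUris": ["http://localhost:8080/*"]
    }
  ],
  "users": [
    {
      "username": "dev1",
      "enabled": true,
      "credentials": [{"type": "password", "value": "changeme"}]
    }
  ]
}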

For the ingress I added an nginx service, listening on port 80 inside its container and published on host port 8080, and it too has some specific configuration. So the start of my docker-compose file looks like:

version: "3"

services:

  nginx: 
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "8080:80"

  keycloak:
    image: jboss/keycloak:12.0.4
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - KEYCLOAK_IMPORT=/tmp/keycloak-realm.json
    ports:
      - "8080"
    volumes:
      - ./keycloak-realm.json:/tmp/keycloak-realm.json

...

You can see the nginx service and Keycloak. Each of them has a volume that maps a local file into the running container, and the service loads its configuration from there.

The nginx.conf file has a number of location entries that mimic the mapping entries in the ingress. The one for Keycloak looks like this:

    location /auth {
        set $keycloak_upstream http://keycloak:8080;
        rewrite  ^/(auth\/realms)/([^/]*)/(.*)$  /$1/tenant01/$3 break;
        proxy_pass          $keycloak_upstream;
        proxy_set_header    Host               $host;
        proxy_set_header    X-Real-IP          $remote_addr;
        proxy_set_header    X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Host   $host;
        proxy_set_header    X-Forwarded-Server $host;
        proxy_set_header    X-Forwarded-Port   8080;
        proxy_set_header    X-Forwarded-Proto  $scheme;
        proxy_set_header    X-TenantKey        tenant01;
    }

There is a rewrite that looks slightly different from the mapping but does the same job, and a load of housekeeping headers that keep Keycloak happy, because for security reasons it is very careful about exactly who is calling it. Notice the final proxy_set_header, which adds the tenant key just like the ingress does, only with nginx syntax.

The other location entries are simpler:

    location /grasshopper/1 {
        set $grasshopper_upstream http://grasshopper:8080;
        proxy_pass $grasshopper_upstream;
        proxy_set_header X-TenantKey tenant01;
    }

This is simpler because these services don't care so much about who called them, as long as the JWT in the request is okay. I set the tenant key in the header just as I do in the ingress, but actually I don't need it here. In my mock environment the developers have no need to deal with multiple tenants, so the mock configurations of the various services don't turn on the special KeycloakSpringBootConfigResolver interceptor I need in Kubernetes. The mock services just hardcode the tenant key as tenant01, and I have one realm defined in Keycloak for that tenant, so the JWT is generated and validated correctly. It isn't multi-tenant because it doesn't have to be.
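
As a sketch, the mock wiring can be as simple as registering the stock resolver under a mock profile, so the realm comes straight from configuration; the profile name here is illustrative:

import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@Profile("mock")
public class MockKeycloakConfig {

    // The stock resolver reads keycloak.realm (tenant01 in mock) and
    // keycloak.auth-server-url from application.yml; no header needed.
    @Bean
    public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }
}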

In fact the portal devs have been instructed to always use realm bonanza regardless of the tenant, including in mock. The ingress or the nginx server will rewrite the realm to the one they really want in both environments. There is, actually, a third environment, which we use for administration, where we really do want to use the bonanza realm; in that ingress mapping we omit the rewrite and just take what is in the URL.
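
For comparison, that administration mapping is just the tenant mapping minus the rewrite, something like this (the admin host name here is hypothetical):

apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: keycloak-admin-backend
spec:
  prefix: /auth/
  service: keycloak-service:8080
  host: admin.mytenant.com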

With this set up, the portal developers see a fairly close mock of the APIs, security constraints included, and they can develop their code against it.
