GSS-Proxy has been built as a way to provide an abstraction layer between a GSS client (typically an application) and the credentials being used. This is normally used to perform privilege separation, so that a client application can authenticate/use secure channels via GSSAPI without direct access to keying material.
In the NFS server case we extended the GSS-Proxy protocol to be able to talk directly to the in-kernel NFSD. The reason we did this was to allow the kernel NFS server to handle big tickets, like those containing an MS-PAC payload that may be received from a Microsoft client. For the NFS client case, GSS-Proxy can perform impersonation behind the scenes so that access to files is "always on" while network-level security is maintained.
To use GSS-Proxy with the NFS server you need a recent enough kernel. Anything more recent than 3.10 should work just fine.
At the time of writing, the choice between the classic rpc.svcgssd protocol and the new gssproxy protocol is made only once, at runtime. Once the kernel "chooses" a method it cannot be changed; a reboot is necessary to switch.
The kernel picks the protocol on the first authentication request it receives: if a gssproxy "client" has registered, it binds to the gssproxy protocol; otherwise it selects the classic rpc.svcgssd protocol and sticks with it.
The gssproxy client registers with the kernel by performing two actions, in the following order:
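1. It starts listening on the well-known UNIX socket the kernel connects to (/var/run/gssproxy.sock).
2. It writes "1" into the procfile /proc/net/rpc/use-gss-proxy.

GSS-Proxy performs both steps automatically at startup when it finds a service configured with kernel_nfsd = yes, so no manual intervention is needed.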
The simplest GSS-Proxy configuration file to act as an NFSD helper is the following:
    [gssproxy]

    [service/nfs-server]
      mechs = krb5
      socket = /run/gssproxy.sock
      cred_store = keytab:/etc/krb5.keytab
      trusted = yes
      kernel_nfsd = yes
      euid = 0
Let's see what this means, line by line:
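- [gssproxy] opens the global section; it is empty here, so global defaults apply.
- [service/nfs-server] defines a service; the name after "service/" is arbitrary.
- mechs = krb5 restricts the service to the krb5 GSSAPI mechanism.
- socket = /run/gssproxy.sock sets the UNIX socket this service listens on; this is the well-known path the kernel connects to.
- cred_store = keytab:/etc/krb5.keytab points at the keytab used for acceptor credentials, the system keytab in this case.
- trusted = yes marks the peer (here, the kernel) as trusted to perform privileged operations.
- kernel_nfsd = yes enables the kernel NFSD support, including the registration steps described above.
- euid = 0 restricts use of this service to processes running with effective uid 0.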
This is pretty much all that is needed to use GSS-Proxy as the user-space helper for the kernel NFSD daemon when GSSAPI authentication of a client is required.
The NFS client case is a little bit more complicated. For starters, at the time of writing the in-kernel NFS client code still uses the classic protocol and can only talk to the rpc.gssd service. The interaction with GSS-Proxy, in the client case, is more subtle.
The way GSS-Proxy is configured in the client case is pretty much the normal userspace interposition, performed at the libgssapi level. The GSSAPI library needs to be configured (either in /etc/gss.conf or /etc/gss.conf.d/gssproxy.conf) to load the GSS-Proxy interposer plugin, which allows GSS-Proxy to intercept GSSAPI calls and sprinkle a little magic on the operations (as well as perform privilege separation).
Example gss.conf
    # GSS-API mechanism plugins
    #
    # Mechanism Name    Object Identifier              Shared Library Path             Other Options
    gssproxy_v1         2.16.840.1.113730.3.8.15.1     @libdir@/gssproxy/proxymech.so  <interposer>
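Note that @libdir@ is a build-time placeholder; on an installed system the line points at the actual library directory (for example /usr/lib64/gssproxy/proxymech.so on a Fedora x86_64 machine).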
Once this is done rpc.gssd must be started with the following environment variable defined:
GSS_USE_PROXY="yes"
This instructs the interposer plugin to act; otherwise the interposer plugin will silently fall back to standard GSSAPI behavior.
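How the variable gets set depends on the distribution. As a sketch, on a systemd-based system a drop-in for the rpc-gssd unit could look like this (the unit and file names are assumptions; check your distribution's packaging):

    # /etc/systemd/system/rpc-gssd.service.d/gss-use-proxy.conf
    # Hypothetical drop-in path; adjust the unit name to your distribution.
    [Service]
    Environment=GSS_USE_PROXY=yes

followed by a systemctl daemon-reload and a restart of the rpc-gssd service.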
In the client case GSS-Proxy is usually employed when special cases need to be handled. For example, on unmanned systems people may need to use kerberized NFS, but no human being is present to manually create a credential cache. In these cases there are two options that can be employed, depending on the administrator's preference and the local KDC's capabilities. NOTE: we assume a modern version of rpc.gssd that drops privileges to the requesting uid before calling GSSAPI.
The GSS-Proxy daemon can easily initiate client credentials automatically based on a keytab. In this case services (for example an Apache server) on the NFS client machine need to access a krb5-protected mount unattended, but they can be given a Kerberos identity (principal) and a matching set of keys (stored in keytabs).
The following configuration allows this mode of operation:
    [service/nfs-client]
      mechs = krb5
      cred_store = keytab:/etc/krb5.keytab
      cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_%U
      cred_store = client_keytab:/var/lib/gssproxy/clients/%U.keytab
      cred_usage = initiate
      allow_any_uid = yes
      euid = 0
Let's see what this means, line by line:
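- mechs = krb5 restricts the service to the krb5 mechanism, as before.
- cred_store = keytab:/etc/krb5.keytab points at the system keytab that identifies the machine.
- cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_%U is where obtained credentials are cached; %U is replaced with the uid of the requesting user.
- cred_store = client_keytab:/var/lib/gssproxy/clients/%U.keytab is the per-uid keytab from which GSS-Proxy can initiate credentials automatically.
- cred_usage = initiate means this service only initiates security contexts (the client side); it never accepts them.
- allow_any_uid = yes makes the service usable by any local uid, each one seeing only its own credentials.
- euid = 0 names the service's owning uid; a service section always requires one, and allow_any_uid relaxes the restriction for everyone else.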
You may notice that an explicit UNIX socket is not configured; this means the service is exposed on the default socket, which lives at /var/lib/gssproxy/default.sock.
The important bit here is the per-user keytabs stored under /var/lib/gssproxy/clients. If you have an HTTP service running as user apache (uid=48) that needs to connect unattended, then keys for this user can be stored in /var/lib/gssproxy/clients/48.keytab, and the user can from then on always successfully use kerberized NFS without any additional configuration (no cron jobs or other wrapper services necessary). The necessary credentials are fetched from the KDC on demand.
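As an illustration, with an MIT KDC the keys could be extracted into the expected location roughly like this (the principal and realm names are made-up examples; your KDC administration tool may differ):

    # Store the service's keys where gssproxy looks for uid 48's client keytab.
    # HTTP/nfsclient.example.com@EXAMPLE.COM is a hypothetical principal.
    kadmin -p admin@EXAMPLE.COM -q "ktadd -k /var/lib/gssproxy/clients/48.keytab HTTP/nfsclient.example.com@EXAMPLE.COM"
    chmod 600 /var/lib/gssproxy/clients/48.keytab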
Note that if credentials are not available, GSS-Proxy returns control to rpc.gssd and the usual crawling for credentials is attempted.
Note that if only one specific service needs to connect, you may also use a more restrictive service definition.
Here is an example for a system where only the Apache server uses unattended kerberized NFS mounts:
    [service/apache]
      mechs = krb5
      cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_apache
      cred_store = client_keytab:/var/lib/gssproxy/clients/httpd.keytab
      cred_usage = initiate
      euid = 48
In this example, only the apache user (euid = 48) is permitted to use the GSS-Proxy service, and fixed names are used for the ccache and keytab instead of uid-dependent ones.
The previous method works fine if you have a limited set of services on the machine that need unattended access to an NFS file system, but what do you do if you have actual physical users? Handling users' keytabs is annoying, both because they are a liability and because users' passwords can be changed by the users themselves (and when that happens the keytab becomes invalid and needs to be regenerated). It also becomes cumbersome pretty quickly when there are many users.
Yet in many cases users run long-lasting jobs on machines that are relatively trusted (by the admin) and want to make sure their jobs won't fail after a day because their credentials expire and they suddenly lose access to the NFS share. (Or the job may even be batch-scheduled, so the user never has a chance to leave credentials on the machine actually running it.)
In these cases, if the KDC supports Constrained Delegation, and specifically the s4u2self and s4u2proxy protocol extensions, the administrator can "empower" the NFS client machine to impersonate arbitrary users. NOTE: not all KDCs support these extensions or have a way to properly configure them. The two main products that do are Microsoft's Active Directory and the Linux-oriented FreeIPA. NOTE: the FreeIPA s4u2proxy implementation can also precisely limit which services can be reached via delegation, for example allowing the NFS client machine to obtain tickets exclusively for one specific NFS server and no other service at all.
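As a sketch of what this can look like on FreeIPA (the rule, target, and principal names below are made-up examples; consult the FreeIPA documentation for the authoritative procedure):

    # Hypothetical names throughout. Define which services may be reached
    # via delegation (here, a single NFS server):
    ipa servicedelegationtarget-add nfs-servers
    ipa servicedelegationtarget-add-member nfs-servers \
        --principals=nfs/nfsserver.example.com@EXAMPLE.COM

    # Create the rule, let the NFS client machine use it, and tie it
    # to the target defined above:
    ipa servicedelegationrule-add nfs-client-delegation
    ipa servicedelegationrule-add-member nfs-client-delegation \
        --principals=host/nfsclient.example.com@EXAMPLE.COM
    ipa servicedelegationrule-add-target nfs-client-delegation \
        --servicedelegationtargets=nfs-servers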
The following configuration allows this mode of operation:
    [service/nfs-client]
      mechs = krb5
      cred_store = keytab:/etc/krb5.keytab
      cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_%U
      cred_usage = initiate
      allow_any_uid = yes
      impersonate = true
      euid = 0
Let's see what this means, line by line:
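Most lines are identical to the previous client configuration; the differences are:

- impersonate = true instructs GSS-Proxy to use the machine credentials (from the system keytab) to perform s4u2self and s4u2proxy exchanges on behalf of the requesting user.
- There is no client_keytab line: no per-user keys are needed anymore, since the machine credentials do the impersonation.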
So what happens here if a user process tries to walk a mount point?
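In outline, with the configuration above:

1. The kernel sends an upcall to rpc.gssd asking for credentials to establish a security context.
2. rpc.gssd drops privileges to the user's uid and calls into GSSAPI; the interposer plugin forwards the call to GSS-Proxy.
3. GSS-Proxy finds no usable credentials in the user's ccache and, because impersonate = true, uses the machine credentials to perform an s4u2self exchange (obtaining a ticket from the user to itself), followed by an s4u2proxy exchange (obtaining a ticket for the NFS server on behalf of the user).
4. The resulting credentials are stored in the user's ccache under /var/lib/gssproxy/clients and used to establish the security context.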
If all goes well, the NFS client can now impersonate the user and successfully connect to the NFS server.
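To verify, assuming the ccache layout configured above, you can inspect the cache GSS-Proxy populated for a given user (uid 1000 is a hypothetical example) as root:

    # 1000 is a placeholder uid; substitute the user you are testing with.
    klist -c FILE:/var/lib/gssproxy/clients/krb5cc_1000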