Managing cleanup of persistent grants
PingFederate can cap the number of persistent grants based on a combination of user, client, grant type, and authentication context.
About this task
Capping the number of persistent grants helps limit the data stored for persistent grants, especially in scenarios where clients frequently request authorization in a single context.
When PingFederate needs to record a new grant, it checks whether creating the grant would push the number of stored grants past the limit. If so, PingFederate creates the grant and then removes just enough grants to bring the total back down to the limit, starting with the oldest grant, expired or not. For performance reasons, this cleanup task also limits the number of grants it can remove per attempt. If it cannot remove all grants in excess of the limit, it removes what it can and repeats the process the next time PingFederate needs to record a new grant.
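For example, with the default limit of 100 and the default removal batch size of 50 for a database server, suppose a single user, client, grant type, and authentication context combination has accumulated 160 grants (a hypothetical count, possible if the limit was lowered after the grants were recorded). Recording a new grant removes the 50 oldest grants, leaving 111; recording the next grant removes 12 more, bringing the total back to the 100-grant limit.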
This cleanup runs on every engine node in a clustered PingFederate environment. Also, it does not replace the cleanup task or the PingDirectory plugin engineered to manage expired grants. Working together, they keep the size of the grant datastore under control.
The default limit is 100 grants per user, client, grant type, and authentication context. Depending on the storage platform, the default maximum number of grants that this cleanup task can remove per attempt varies.
This cleanup task is enabled on new installations. When upgrading from version 9.1 or an earlier version, it is disabled. You can enable it by editing an XML configuration file.
Steps
-
Edit the configuration file relevant to your storage platform.
This configuration file is located in the <pf_install>/pingfederate/server/default/data/config-store directory, as described in the following table.

Storage platform             Configuration file
Database server              org.sourceid.oauth20.token.AccessGrantManagerJdbcImpl.xml
PingDirectory                org.sourceid.oauth20.token.AccessGrantManagerLDAPPingDirectoryImpl.xml
Microsoft Active Directory   org.sourceid.oauth20.token.AccessGrantManagerLDAPADImpl.xml
Oracle Unified Directory     org.sourceid.oauth20.token.AccessGrantManagerLDAPOracleImpl.xml
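For example, if persistent grants are stored in a database server, edit the org.sourceid.oauth20.token.AccessGrantManagerJdbcImpl.xml file in that directory.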
-
Locate the following comments.
...
<!--
    Maximum number of persistent grants allowed to store in the database per
    user, client and grant type and authentication context qualifier.
    Setting this to a value <= 0 will turn this limit off
    Default configuration:
    <c:item name="maxPersistentGrants">100</c:item>
-->
<c:item name="maxPersistentGrants">100</c:item>
<!--
    Maximum number of persistent grants to delete when max allowed is reached
    during new grant creation.
    Setting this to a value <= 0 will turn this limit off
    Default configuration:
    <c:item name="maxPersistentGrantsToRemoveBatchSize">n</c:item>
-->
<c:item name="maxPersistentGrantsToRemoveBatchSize">n</c:item>
...

The maxPersistentGrants value represents the maximum number of grants allowed per combination of user, client, grant type, and authentication context.

The maxPersistentGrantsToRemoveBatchSize value represents the maximum number of grants that the cleanup task removes per attempt. Its default value (n) depends on the storage platform: 50 for a database server and 10 for a directory server.

The maxPersistentGrants and maxPersistentGrantsToRemoveBatchSize items exist only on new installations starting with version 9.2. When upgrading from version 9.1 or an earlier version, the upgrade tools insert only the comments for reference.
-
Optional: Adjust the maxPersistentGrants and maxPersistentGrantsToRemoveBatchSize values.
Use integers only.
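For example, a deployment that wants a tighter limit on a database server might use values like the following. These numbers are illustrative only, not recommendations:

<c:item name="maxPersistentGrants">50</c:item>  <!-- illustrative value -->
<c:item name="maxPersistentGrantsToRemoveBatchSize">25</c:item>  <!-- illustrative value -->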
-
To enable this cleanup task after upgrading from version 9.1 or an earlier version, insert the maxPersistentGrants and maxPersistentGrantsToRemoveBatchSize items into the configuration file.
You can use the default values based on the inline comments, or adjust the values to suit the needs of your organization.
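As a sketch, for a database server you could insert the items with their default values (100 and 50) directly below the corresponding comments; for a directory server, the default batch size is 10 instead:

<c:item name="maxPersistentGrants">100</c:item>
<c:item name="maxPersistentGrantsToRemoveBatchSize">50</c:item>  <!-- use 10 for a directory server -->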
-
Save your changes.
-
Restart PingFederate.
For a clustered PingFederate environment, perform these steps on the console node, and then click Replicate Configuration on System → Server → Cluster Management.