How to Set an Expiration Policy on DigitalOcean Spaces Buckets

The DigitalOcean Spaces web interface doesn’t provide a way to set bucket lifecycle policies. Here’s how to apply expiration settings that automatically delete files after a set time period. This makes Spaces a more suitable location for rotated backups and log files.

Although this feature isn’t in DigitalOcean’s UI, it is supported by the Spaces backend. Spaces is compatible with Amazon S3 APIs so command-line clients can set S3-based lifecycle policies. The steps detailed below should also work with other object storage providers that implement S3 APIs.

Getting Started

You’ll need the AWS CLI installed to follow along with this tutorial. Once it’s installed, the first task is to supply credentials so the CLI can access your DigitalOcean account.

Head to the DigitalOcean Control Panel in your browser. Click the “API” link at the bottom of the blue sidebar to the left of your screen. Next, click the “Generate New Key” button to the right of the “Spaces access keys” heading.

Give your new key a name, then click the checkmark to complete the process. Your key and its corresponding secret will be displayed. Take note of these values as it’s impossible to retrieve the secret part after you leave the screen.


Return to your terminal and run aws configure. You’ll be asked for your access key and secret. Follow the interactive prompts to supply the values you generated in DigitalOcean’s web interface.

Unfortunately this still isn’t the end of the CLI setup. A significant limitation of the official S3 client is its inability to save custom endpoint URLs alongside your credentials. This means you must explicitly specify the DigitalOcean API URL with every command you issue:

aws s3 ls s3://my-bucket --endpoint-url https://nyc3.digitaloceanspaces.com

The command above will display the objects in the my-bucket bucket of your Spaces account. If you omitted the --endpoint-url flag, the S3 CLI would assume you’re trying to connect to an AWS account. The endpoint URL needs to match the DigitalOcean datacenter region you created your Space in – substitute your own region for the nyc3 subdomain.
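One workaround is to wrap the CLI in a small shell function so the endpoint is filled in for you. This is an optional convenience sketch, not an official CLI feature – doaws is an arbitrary name and nyc3 is an assumed region:

```shell
# Optional convenience wrapper -- "doaws" is an arbitrary name.
# Replace nyc3 with the region your Space was created in.
doaws() {
  aws --endpoint-url "https://nyc3.digitaloceanspaces.com" "$@"
}
```

Add the function to your shell profile (e.g. ~/.bashrc) and you can run commands like doaws s3 ls s3://my-bucket without repeating the endpoint flag each time.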

Creating Your Policy

Bucket lifecycle policies are defined as JSON files that describe the rules you want to apply. Create a new file using your favorite text editor and add the following content:

{
  "Rules": [
    {
      "ID": "Prune old files",
      "Status": "Enabled",
      "Prefix": "",
      "Expiration": {
        "Days": 30
      }
    }
  ]
}

The JSON is a declarative representation of the policy to apply. The policy’s attributes and its current state are both specified inside the file.

This example will delete files 30 days after they’re uploaded. Setting Status to Enabled activates the policy, while an empty Prefix applies it to every item in the bucket. You can use the Prefix field to selectively delete only certain objects, such as those in the temp/ subdirectory.
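If you’d rather stay in the terminal, a heredoc can create the policy file in one step. This is just one way to write the file; my-policy.json matches the filename used in the next step:

```shell
# Write the 30-day expiration policy to my-policy.json
cat > my-policy.json <<'EOF'
{
  "Rules": [
    {
      "ID": "Prune old files",
      "Status": "Enabled",
      "Prefix": "",
      "Expiration": {
        "Days": 30
      }
    }
  ]
}
EOF
```

Before applying the file, you can optionally sanity-check it with a JSON validator such as python3 -m json.tool my-policy.json – a malformed policy will be rejected by the API.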

Applying the Policy

Next you need to use the AWS CLI to apply your policy to your bucket:

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --endpoint-url https://nyc3.digitaloceanspaces.com \
  --lifecycle-configuration file://my-policy.json

Substitute my-bucket with the name of the bucket you want to use your expiration rules with.

The CLI will read your policy JSON file and attach it to the bucket. As long as the Status is Enabled, the lifecycle rules will apply immediately. You’ll start seeing newly uploaded objects leave your bucket as they pass the threshold for expiration.

You can check that your policy’s been applied by reading it back with the CLI:

aws s3api get-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --endpoint-url https://nyc3.digitaloceanspaces.com

This should show you the JSON you submitted.

Using Multiple Rules

You can include multiple items in your Rules JSON array. This lets you apply unique expiration policies to different groups of objects, using the Prefix field:

{
  "Rules": [
    {
      "ID": "Prune Invoices",
      "Status": "Enabled",
      "Prefix": "I",
      "Expiration": {
        "Days": 90
      }
    },
    {
      "ID": "Prune Quotations",
      "Status": "Enabled",
      "Prefix": "Q",
      "Expiration": {
        "Days": 30
      }
    }
  ]
}

This policy would delete quotations after 30 days while letting invoices live for 90 days. Each bucket supports up to 100 individual lifecycle rules.

Aborting Failed Uploads

Another role of lifecycle policies is cleaning up after failed multipart uploads. When you add large files via the S3 APIs, they’re chunked into streamable sections to improve performance and resiliency to network dropouts.


You can end up with partial chunks sitting in your bucket if an upload part fails to complete. Add the AbortIncompleteMultipartUpload field to a lifecycle policy to remove these redundant chunks.

{
  "Rules": [
    {
      "ID": "AbortIncompleteMultipartUpload",
      "Prefix": "",
      "Status": "Enabled",
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 1
      }
    }
  ]
}

This policy cleans incomplete upload chunks one day after they were started, potentially freeing up some storage space. When chunks are deleted, you won’t be able to resume the original upload again – clients will need to restart it from the beginning.
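Expiration and multipart-cleanup rules can live in the same policy, so one file can handle both jobs. A sketch combining the two rules from this article (combined-policy.json is an arbitrary filename):

```shell
# Combine the 30-day expiration rule and the multipart cleanup rule
# in a single policy file
cat > combined-policy.json <<'EOF'
{
  "Rules": [
    {
      "ID": "Prune old files",
      "Status": "Enabled",
      "Prefix": "",
      "Expiration": {
        "Days": 30
      }
    },
    {
      "ID": "AbortIncompleteMultipartUpload",
      "Prefix": "",
      "Status": "Enabled",
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 1
      }
    }
  ]
}
EOF
```

Apply it with put-bucket-lifecycle-configuration exactly as before. Note that each call replaces the bucket’s entire lifecycle configuration, so include every rule you want to keep in the one file.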


DigitalOcean Spaces supports S3 lifecycle policies but you have to apply them using the API. Once configured, your uploads will be deleted automatically after a set time period, giving you confidence that old files aren’t wasting storage space and raising your bill.

Although Spaces implements expiration policies in the same way as S3, other forms of lifecycle policy aren’t available on DigitalOcean’s platform. Another key component of S3’s feature set is the ability to transition objects between storage classes, such as an automatic migration to archival storage after 30 days. Billing for Spaces is much simpler, with only one plan available, so these policies won’t have an effect on DigitalOcean buckets.