A variety of Deis Workflow components rely on an object storage system to do their work, including storing application slugs, Docker images, and database logs.
Deis Workflow ships with Minio by default, which provides in-cluster, ephemeral object storage. This means that if the Minio server crashes, all data will be lost. Therefore, Minio should be used for development or testing only.
Every component that relies on object storage uses two inputs for configuration:
- Component-specific environment variables (e.g. `BUILDER_STORAGE` and `REGISTRY_STORAGE`)
- Access credentials stored as a Kubernetes secret named `objectstorage-keyfile`
The Helm Classic chart for Deis Workflow can be easily configured to connect Workflow components to off-cluster object storage. Deis Workflow currently supports Google Cloud Storage, Amazon S3, Azure Blob Storage, and OpenStack Swift Storage.
Create storage buckets for each of the Workflow subsystems: builder, registry, and database.
Depending on your chosen object storage, you may need to provide globally unique bucket names.
If you provide credentials with sufficient access to the underlying storage, Workflow components will create the buckets if they do not exist.
If applicable, generate credentials that have create and write access to the storage buckets created in Step 1.
If you are using AWS S3 and your Kubernetes nodes are configured with appropriate IAM API keys via InstanceRoles, you do not need to create API credentials. Do, however, validate that the InstanceRole has appropriate permissions to the configured buckets!
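As a rough sketch, an InstanceRole policy granting Workflow the access it needs might look like the following. The bucket names are hypothetical stand-ins for the buckets you created; drop `s3:CreateBucket` if you pre-create the buckets yourself.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::myorg-workflow-builder",
        "arn:aws:s3:::myorg-workflow-builder/*",
        "arn:aws:s3:::myorg-workflow-registry",
        "arn:aws:s3:::myorg-workflow-registry/*",
        "arn:aws:s3:::myorg-workflow-database",
        "arn:aws:s3:::myorg-workflow-database/*"
      ]
    }
  ]
}
```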
If you haven't already fetched the Helm Classic chart, do so with `helmc fetch deis/workflow-v2.7.0`.
Operators should configure object storage by either populating a set of environment variables or editing the Helm Classic parameters file before running `helmc generate`. Both options are documented below:
Option 1: Using environment variables
After setting a `STORAGE_TYPE` environment variable to the desired object storage type (`s3`, `gcs`, `azure`, or `swift`), set the additional variables as required by the selected object storage:
| Storage Type | Required Variables | Notes |
|---|---|---|
| `s3` | `AWS_ACCESS_KEY`, `AWS_SECRET_KEY`, `AWS_REGISTRY_BUCKET`, `AWS_DATABASE_BUCKET`, `AWS_BUILDER_BUCKET`, `S3_REGION` | To use IAM credentials, it is not necessary to set `AWS_ACCESS_KEY` or `AWS_SECRET_KEY`. |
| `gcs` | `GCS_KEY_JSON`, `GCS_REGISTRY_BUCKET`, `GCS_DATABASE_BUCKET`, `GCS_BUILDER_BUCKET` | |
| `azure` | `AZURE_ACCOUNT_NAME`, `AZURE_ACCOUNT_KEY`, `AZURE_REGISTRY_CONTAINER`, `AZURE_DATABASE_CONTAINER`, `AZURE_BUILDER_CONTAINER` | |
| `swift` | `SWIFT_USERNAME`, `SWIFT_PASSWORD`, `SWIFT_AUTHURL`, `SWIFT_AUTHVERSION`, `SWIFT_REGISTRY_CONTAINER`, `SWIFT_DATABASE_CONTAINER`, `SWIFT_BUILDER_CONTAINER` | If the auth version is 2 or later, set `SWIFT_TENANT` to specify the tenant. |
!!! note
    These environment variables should be set before running `helmc generate` in Step 5.
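For example, to select S3-backed storage before running `helmc generate`, you might set the variables like this. The credentials, bucket names, and region below are placeholders, not real values:

```shell
# Select the S3 backend and point each Workflow subsystem at its bucket.
# All values below are examples; substitute your own credentials,
# bucket names, and region.
export STORAGE_TYPE=s3
export AWS_ACCESS_KEY="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_KEY="wJalrXUtnFEMI/K7MDENG/bPxRCYEXAMPLEKEY"
export AWS_REGISTRY_BUCKET=myorg-workflow-registry
export AWS_DATABASE_BUCKET=myorg-workflow-database
export AWS_BUILDER_BUCKET=myorg-workflow-builder
export S3_REGION=us-west-2
```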
Option 2: Using the template file `tpl/generate_params.toml` available at `$(helmc home)/workspace/charts/workflow-v2.7.0`
- Edit the Helm Classic chart by running `helmc edit workflow-v2.7.0` and look for the template file `tpl/generate_params.toml` (make sure you have the `$EDITOR` environment variable set to your favorite text editor)
- Update the `storage` parameter to reference the platform you are using, e.g. `s3`, `azure`, `gcs`, or `swift`
- Find the corresponding section for your storage type and provide appropriate values, including region, bucket names, and access credentials
- Save your changes to `tpl/generate_params.toml`
!!! note
    You do not need to base64 encode any of these values, as Helm Classic will handle encoding automatically.
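As an illustrative sketch, the edited file might end up looking something like the excerpt below for an S3 configuration. Use the key names already present in your copy of `tpl/generate_params.toml`; the keys and values shown here are examples, not authoritative:

```toml
# Illustrative excerpt only: select s3 and fill in its section.
storage = "s3"

[s3]
accesskey = "AKIAIOSFODNN7EXAMPLE"
secretkey = "wJalrXUtnFEMI/K7MDENG/bPxRCYEXAMPLEKEY"
region = "us-west-2"
registry_bucket = "myorg-workflow-registry"
database_bucket = "myorg-workflow-database"
builder_bucket = "myorg-workflow-builder"
```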
Generate the Workflow chart by running `helmc generate -x manifests workflow-v2.7.0` (if you have previously run this step, make sure you add `-f` to force its regeneration).
Helm Classic stores the object storage configuration as a Kubernetes secret.
You may check the contents of the generated file named `deis-objectstorage-secret.yaml` in the `helmc` workspace directory:
```
$ cat $(helmc home)/workspace/charts/workflow-v2.7.0/manifests/deis-objectstorage-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: objectstorage-keyfile
...
data:
  accesskey: bm9wZSBub3BlCg==
  secretkey: c3VwZXIgbm9wZSBub3BlIG5vcGUgbm9wZSBub3BlCg==
  region: ZWFyZgo=
  registry-bucket: bXlmYW5jeS1yZWdpc3RyeS1idWNrZXQK
  database-bucket: bXlmYW5jeS1kYXRhYmFzZS1idWNrZXQK
  builder-bucket: bXlmYW5jeS1idWlsZGVyLWJ1c2tldAo=
```
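Each value under `data:` is base64-encoded. To spot-check one, decode it with `base64` (shown here with one of the example values above):

```shell
# Decode a base64-encoded secret value to verify its contents.
echo "bXlmYW5jeS1yZWdpc3RyeS1idWNrZXQK" | base64 --decode
# prints "myfancy-registry-bucket"
```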
You are now ready to `helmc install workflow-v2.7.0` using your desired object storage.
During the `helmc generate` step, Helm Classic creates a Kubernetes secret in the `deis` namespace named `objectstorage-keyfile`. The exact structure of the file depends on the storage backend specified in `tpl/generate_params.toml`.
```
# Set the storage backend
#
# Valid values are:
# - s3: Store persistent data in AWS S3 (configure in S3 section)
# - azure: Store persistent data in Azure's object storage
# - gcs: Store persistent data in Google Cloud Storage
# - minio: Store persistent data on in-cluster Minio server
# - swift: Store persistent data in OpenStack Swift object storage cluster
storage = "minio"
```
Individual components map the master credential secret to either secret-backed environment variables or volumes. See below for the component-by-component locations.
The builder looks for a `BUILDER_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information from the `objectstore-creds` volume.
Slugbuilder is configured and launched by the builder component. Slugbuilder reads credential information from the standard `objectstorage-keyfile` secret.
If you are using slugbuilder as a standalone component, the following configuration is important:

- `TAR_PATH` - The location of the application `.tar` archive, relative to the configured bucket for builder, e.g. `home/burley-yeomanry:git-3865c987/tar`
- `PUT_PATH` - The location to upload the finished slug, relative to the configured bucket for builder, e.g. `home/burley-yeomanry:git-3865c987/push`
- `CACHE_PATH` - The location to upload the cache, relative to the configured bucket for builder, e.g. `home/burley-yeomanry/cache`
!!! note
    These environment variables are case-sensitive.
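As a sketch, a standalone slugbuilder invocation might export bucket-relative paths like these, reusing the example `home/<app>:git-<sha>/...` layout shown above:

```shell
# Example bucket-relative paths for a standalone slugbuilder run.
# The application name and git SHA are the example values from above.
export TAR_PATH="home/burley-yeomanry:git-3865c987/tar"
export PUT_PATH="home/burley-yeomanry:git-3865c987/push"
export CACHE_PATH="home/burley-yeomanry/cache"
```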
Slugrunner is configured and launched by the controller inside a Workflow cluster. If you are using slugrunner as a standalone component, the following configuration is important:

- `SLUG_URL` - environment variable containing the path of the slug, relative to the builder storage location, e.g. `home/burley-yeomanry:git-3865c987/push/slug.tgz`
Slugrunner reads credential information from an `objectstorage-keyfile` secret in the current Kubernetes namespace.
Dockerbuilder is configured and launched by the builder component. Dockerbuilder reads credential information from the standard `objectstorage-keyfile` secret.

If you are using dockerbuilder as a standalone component, the following configuration is important:

- `TAR_PATH` - The location of the application `.tar` archive, relative to the configured bucket for builder, e.g. `home/burley-yeomanry:git-3865c987/tar`
The controller is responsible for configuring the execution environment for buildpack-based applications. The controller copies `objectstorage-keyfile` into the application namespace so slugrunner can fetch the application slug.
The controller interacts through Kubernetes APIs and does not use any environment variables for object storage configuration.
The registry looks for a `REGISTRY_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information.

The registry reads credential information from `/var/run/secrets/deis/registry/creds/objectstorage-keyfile`, which is the file location of the `objectstorage-keyfile` secret on the Pod filesystem.
The database looks for a `DATABASE_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information.
Minio (`DATABASE_STORAGE=minio`):

- `AWS_ACCESS_KEY_ID` via `/var/run/secrets/deis/objectstore/creds/accesskey`
- `AWS_SECRET_ACCESS_KEY` via `/var/run/secrets/deis/objectstore/creds/secretkey`
- `AWS_DEFAULT_REGION` is the Minio default of "us-east-1"
- `BUCKET_NAME` is the on-cluster default of "dbwal"

AWS (`DATABASE_STORAGE=s3`):

- `AWS_ACCESS_KEY_ID` via `/var/run/secrets/deis/objectstore/creds/accesskey`
- `AWS_SECRET_ACCESS_KEY` via `/var/run/secrets/deis/objectstore/creds/secretkey`
- `AWS_DEFAULT_REGION` via `/var/run/secrets/deis/objectstore/creds/region`
- `BUCKET_NAME` via `/var/run/secrets/deis/objectstore/creds/database-bucket`

GCS (`DATABASE_STORAGE=gcs`):

- `GS_APPLICATION_CREDS` via `/var/run/secrets/deis/objectstore/creds/key.json`
- `BUCKET_NAME` via `/var/run/secrets/deis/objectstore/creds/database-bucket`

Azure (`DATABASE_STORAGE=azure`):

- `WABS_ACCOUNT_NAME` via `/var/run/secrets/deis/objectstore/creds/accountname`
- `WABS_ACCESS_KEY` via `/var/run/secrets/deis/objectstore/creds/accountkey`
- `BUCKET_NAME` via `/var/run/secrets/deis/objectstore/creds/database-container`

Swift (`DATABASE_STORAGE=swift`):

- `SWIFT_USERNAME` via `/var/run/secrets/deis/objectstore/creds/username`
- `SWIFT_PASSWORD` via `/var/run/secrets/deis/objectstore/creds/password`
- `SWIFT_AUTHURL` via `/var/run/secrets/deis/objectstore/creds/authurl`
- `SWIFT_AUTHVERSION` via `/var/run/secrets/deis/objectstore/creds/authversion`
- `SWIFT_TENANT` via `/var/run/secrets/deis/objectstore/creds/tenant`
- `BUCKET_NAME` via `/var/run/secrets/deis/objectstore/creds/database-container`
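The mappings above amount to reading files from the mounted secret and exporting their contents as environment variables. The sketch below simulates the mount with a temporary directory so it can run outside a cluster; the paths and values are stand-ins for the real `/var/run/secrets/deis/objectstore/creds` mount:

```shell
# Simulate the secret mount; in a real pod this directory would be
# /var/run/secrets/deis/objectstore/creds, populated by Kubernetes.
CREDS_DIR="$(mktemp -d)"
printf 'AKIAIOSFODNN7EXAMPLE'    > "$CREDS_DIR/accesskey"
printf 'us-west-2'               > "$CREDS_DIR/region"
printf 'myorg-workflow-database' > "$CREDS_DIR/database-bucket"

# Resolve the variables the database component expects for the s3 backend.
export AWS_ACCESS_KEY_ID="$(cat "$CREDS_DIR/accesskey")"
export AWS_DEFAULT_REGION="$(cat "$CREDS_DIR/region")"
export BUCKET_NAME="$(cat "$CREDS_DIR/database-bucket")"
echo "$AWS_DEFAULT_REGION/$BUCKET_NAME"  # prints "us-west-2/myorg-workflow-database"
```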