
Configure Object Storage

A variety of Deis components rely on an object storage system to do their work. These components are:

  • builder
  • slugbuilder
  • slugrunner
  • controller
  • registry
  • database

These components are flexible and can work out of the box with almost any system that is compatible with the S3 API.

Minio

Additionally, Deis ships with a Minio component. This component runs as a Kubernetes service, and the components listed above are configured to automatically look for that service and use it as object storage if it's available.

Configuring the Deis Components

Every Deis component that relies on object storage relies on the following two inputs for configuration:

  • One or more environment variables with host and port to describe where the object storage system is
  • One or more files to provide access credentials for the object storage system.

The subsections herein explain how to configure these two inputs for each applicable component.

Builder

Environment Variables

The builder looks for the below environment variables to determine where the object storage system is.

  • DEIS_OUTSIDE_STORAGE - The external S3-compatible object storage system. Commonly used URLs:
    • s3.amazonaws.com for Amazon S3's us-east-1 region
    • storage.googleapis.com for Google Cloud Storage
  • DEIS_MINIO_SERVICE_HOST and DEIS_MINIO_SERVICE_PORT - The in-cluster Minio service. Additional notes about these variables:
    • They are set automatically by Kubernetes if you run Minio as a service in the cluster
    • The Helm chart for Deis installs Minio by default, so the Builder will use Minio by default.

Note that if the builder finds a DEIS_OUTSIDE_STORAGE environment variable, it will ignore DEIS_MINIO_SERVICE_HOST and DEIS_MINIO_SERVICE_PORT. This behavior means that external object storage takes precedence over Minio.

The builder also uses an environment variable to determine the name of the bucket it should store build artifacts in. It uses git by default, but if your credentials (see below for how credentials are configured) don't have read and write access to that bucket, you'll have to specify a different one.

To do so, simply set the BUCKET environment variable to another value (deis-builds, for example).
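For example, the bucket name can be overridden in the builder's container spec. The placement under spec.template.spec.containers[0].env is illustrative; adjust it to match your builder manifest:

```yaml
# Illustrative env entry for the builder's container spec
- name: BUCKET
  value: deis-builds
```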

Credentials

The builder reads credentials from the below locations on the filesystem.

  • Key: /var/run/secrets/object/store/access-key-id
  • Secret: /var/run/secrets/object/store/access-key-secret
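One way to provide these files is a Kubernetes secret mounted as a volume at /var/run/secrets/object/store. The secret name and values below are illustrative; only the key names and mount path need to match what the builder expects:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: object-store-creds   # illustrative name
type: Opaque
data:
  # Values must be base64 encoded
  access-key-id: QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
  access-key-secret: c2VjcmV0LWtleS1nb2VzLWhlcmU=
```

Mounting this secret as a volume at /var/run/secrets/object/store causes each data key to appear as a file at the paths listed above.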

A Note on Google Cloud Storage

Google Cloud Storage (GCS) can interoperate with the S3 API using a feature called interoperability. If you choose to use GCS for object storage, you'll have to turn on this interoperability mode. In order to do so, please follow the steps at https://cloud.google.com/storage/docs/migrating?hl=en_US#migration-simple.

When you're done, please set the DEIS_OUTSIDE_STORAGE environment variable to storage.googleapis.com, and ensure the keys that you created (as part of the previous paragraph) are in the correct locations on the filesystem.
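For example, in the builder's container spec (placement under spec.template.spec.containers[0].env is illustrative):

```yaml
- name: DEIS_OUTSIDE_STORAGE
  value: storage.googleapis.com
```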

Reminder: We recommend storing these and all other credentials as Kubernetes secrets. See the "Configuring Deis Components" section above for more details and examples.

Slugbuilder

Environment Variables

The slugbuilder looks for the below environment variables to determine where to download code from and upload slugs to.

  • TAR_URL - The location from which to download the .tar archive of application code to build
  • PUT_URL - The location to which this component will upload the finished slug

Credentials

The slugbuilder reads credentials from the below locations on the filesystem.

  • Key: /var/run/secrets/object/store/access-key-id
  • Secret: /var/run/secrets/object/store/access-key-secret

Slugrunner

Environment Variables

The slugrunner uses the SLUG_URL environment variable to determine where to download the slug (that it will run) from.

Credentials

The slugrunner reads credentials from the below locations on the filesystem.

  • Key: /var/run/secrets/object/store/access-key-id
  • Secret: /var/run/secrets/object/store/access-key-secret

Controller

When the controller needs to launch or scale a buildpack application, it creates a replication controller. Since the slugrunner needs to download the slug it will run, it needs the slug's object storage location and the object storage credentials.

Environment Variables

The controller needs no environment variables for object storage configuration.

Credentials

Since the object storage location information comes from the builder, the controller only needs access to the credentials information. The controller gets this information by accessing the minio-user secret (even if it's not using Minio as the object storage system) directly from the Kubernetes API.

No paths need to be mounted into the pod. Simply ensure that the secret exists in your Kubernetes cluster with the correct credentials.
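A sketch of such a secret, assuming the key names access-key-id and access-key-secret and the deis namespace (check the Deis chart's deis-minio-secret-user.yaml manifest for the authoritative names); the values are illustrative and must be base64 encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio-user
  namespace: deis          # assumes Workflow runs in the deis namespace
type: Opaque
data:
  access-key-id: QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
  access-key-secret: c2VjcmV0LWtleS1nb2VzLWhlcmU=
```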

Registry

The registry is configured slightly differently from most of the other components. Read on for details.

Environment Variables

The registry looks for a REGISTRY_STORAGE environment variable, which it then uses as a key to look up the object storage location and authentication information in a configuration file. See below for details on that file.

Credentials

The registry reads the credential information from a /var/run/secrets/deis/registry/creds/objectstorage-keyfile file. This is generated automatically (as part of the helm generate command) based on the configuration options given in the https://github.com/deis/charts/blob/master/workflow-dev/tpl/objectstorage.toml file.

Database

The database is configured slightly differently from the other components. Read the two sections below for details.

Environment Variables

The database looks for a DATABASE_STORAGE environment variable, which it then uses as a key to look up the object storage location and authentication information in a configuration file. See below for the details on that file.

Credentials

Depending on the value of DATABASE_STORAGE, the database reads its credentials either from a generic objectstore secret mounted at /var/run/secrets/deis/objectstore/creds/ or from a minio-user secret mounted at /var/run/secrets/deis/database/creds/. The possible configurations are listed below.

Minio

If the DATABASE_STORAGE backend is configured as anything other than "s3", the database will receive its credentials from /var/run/secrets/deis/database/creds/. This is generated based on the configuration options given in the https://github.com/deis/charts/blob/master/workflow-dev/manifests/deis-minio-secret-user.yaml file. The access key and secret key must be base64 encoded.
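To base64 encode a key on the command line (the key shown is AWS's documentation example, not a real credential):

```shell
# Encode without a trailing newline; plain `echo` would append one
# and corrupt the encoded value.
printf '%s' 'AKIAIOSFODNN7EXAMPLE' | base64
# QUtJQUlPU0ZPRE5ON0VYQU1QTEU=
```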

Connection details for Minio are configured via DEIS_MINIO_SERVICE_HOST and DEIS_MINIO_SERVICE_PORT, both of which are provided by the deis-minio service.

Amazon Simple Storage Service (S3)

If the DATABASE_STORAGE backend is configured as "s3", the database will receive its credentials from /var/run/secrets/deis/objectstore/creds/. This is generated automatically (as part of the helm generate command) based on the configuration options given in the https://github.com/deis/charts/blob/master/workflow-dev/tpl/objectstorage.toml file.

Google Cloud Storage (S3 Compatibility Mode)

If the DATABASE_STORAGE backend is configured as "gcs", the database will receive its credentials from /var/run/secrets/deis/database/creds/. This is generated based on the configuration options given in the https://github.com/deis/charts/blob/master/workflow-dev/manifests/deis-minio-secret-user.yaml file. The access key and secret key must be base64 encoded.

You'll also need to add two environment variables to the https://github.com/deis/charts/blob/master/workflow-dev/tpl/deis-database-rc.yaml file so the database can communicate with Google Cloud Storage instead of Minio. Add these values to your spec.template.spec.containers[0].env section, then run helm generate for the settings to take effect the next time you install Workflow:

- name: DEIS_MINIO_SERVICE_HOST
  value: storage.googleapis.com
- name: DEIS_MINIO_SERVICE_PORT
  value: "443"