Configure NFS Code Deploys
Overview
Starting in Astronomer Software v0.25, you can use an external Network File System (NFS) Volume to deploy DAGs to an Airflow Deployment on Astronomer.
Unlike deploying DAGs via the Astronomer CLI, deploying DAGs to an NFS volume, such as Azure File Storage or Google Cloud Filestore, does not require rebuilding a Docker image and restarting your underlying Airflow service. When a DAG is added to an NFS volume, it automatically appears in the Airflow UI without requiring additional action or causing downtime.
This guide provides the necessary setup for connecting an NFS volume to Astronomer as a DAG deploy option. Some considerations before completing this setup:
- You can configure NFS volumes only to deploy DAGs. To push dependencies or other requirements to your Airflow Deployment, you still need to update your `requirements.txt` and `packages.txt` files and deploy via either the Astronomer CLI or CI/CD. For more information on pushing code to your Airflow environment, read Customize Images.
- If you configure an NFS volume for an Airflow Deployment, you can't deploy DAGs via the Astronomer CLI or an Astronomer service account. These options are available only for Deployments configured with an image-based deploy mechanism.
- You can configure NFS volumes only for Airflow Deployments running Airflow 2.0+.
Enable NFS Volume Storage
NFS volume deploys must be explicitly enabled on Astronomer by a System Admin. To enable them, update your `config.yaml` file with the following values:

```yaml
houston:
  config:
    deployments:
      configureDagDeployment: true
      nfsMountDagDeployment: true
```
Once you have saved the file, push the configuration change to your platform as described in Apply a Platform Configuration Change on Astronomer. This process needs to be completed only once per Software installation.
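As a sketch of what that guide describes, applying the change typically means running a `helm upgrade` against your platform release. The release name, chart, and namespace below (`astronomer`) are assumptions; substitute the values used by your installation:

```shell
# Hypothetical sketch: apply the updated config.yaml to the platform.
# Release name, chart, and namespace are assumptions; adjust for your install.
helm upgrade astronomer astronomer/astronomer \
  -f config.yaml \
  -n astronomer
```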
Provision an NFS Volume
While you can use any NFS volume for this step, we recommend using your cloud provider's primary NFS volume solution:
- GCP: Filestore
- Azure: File Storage
- AWS: EFS
For each NFS volume you provision for DAG deploys, you need to configure:
- A directory for DAGs.
- Read access for a user with GID `50000` and UID `50000`. For an example setup, read Configuring IP-based access control in Google Cloud's documentation.
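You can verify the directory and permission requirements above from any Linux host with an NFS client and network access to the volume. A minimal sketch, assuming the example export `192.168.0.1:/path/to/your/dags` used elsewhere in this guide:

```shell
# Hypothetical sketch: mount the NFS export and grant the Airflow
# user (UID/GID 50000) access to the DAG directory.
sudo mkdir -p /mnt/airflow-dags
sudo mount -t nfs 192.168.0.1:/path/to/your/dags /mnt/airflow-dags
sudo chown -R 50000:50000 /mnt/airflow-dags
ls -ln /mnt/airflow-dags   # ownership should list 50000 50000
```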
Add an NFS Volume to a Deployment
Workspace editors can configure a new or existing Airflow Deployment to use a provisioned NFS volume for DAG deploys. From there, any member of your organization with write permissions to the NFS volume can deploy DAGs to the Deployment. To do so:
1. In the Software UI, create a new Airflow Deployment or open an existing one.
2. Go to the DAG Deployment section of the Deployment's Settings page.
3. For your Mechanism, select NFS Volume Mount.
4. In the NFS Location field that appears, enter the location of your volume-based DAG directory as `<IP>:/<path>` (for example: `192.168.0.1:/path/to/your/dags`).
5. Save your changes.
Note: NFS volumes can also be configured via the Astronomer CLI. To do so, specify the `--nfs-location` flag when running `astro deployment create` or `astro deployment update`.
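A sketch of the CLI approach, assuming an existing Deployment identified here by the placeholder `my-deployment` (any additional flags your Deployment requires are unchanged; run `astro deployment update --help` to confirm the exact arguments for your CLI version):

```shell
# Hypothetical sketch: point an existing Deployment at an NFS volume.
# "my-deployment" is a placeholder Deployment identifier.
astro deployment update my-deployment \
  --nfs-location=192.168.0.1:/path/to/your/dags
```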