You should check the `users` section of your kubeconfig file; it should look like this:
```yaml
users:
- name: $NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      provideClusterInfo: true
```
To make kubectl use `gke-gcloud-auth-plugin`, you will need to set the environment variable before running `get-credentials`:
```bash
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER \
--region $REGION \
--project $PROJECT \
--internal-ip
```
I would not have expected the environment variable to still be required now that the old `gcp` auth provider is completely deprecated, but it seems it still is.
Your kubeconfig will end up looking like this once the new auth plugin is in use:
```yaml
...
- name: $NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      provideClusterInfo: true
```
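As a quick sanity check, you can also inspect the kubeconfig programmatically. Below is a minimal Python sketch (standard library only; the embedded kubeconfig text is a placeholder — a real script would read `~/.kube/config`) that reports whether any user entry invokes the new exec plugin:

```python
# Minimal sketch: detect whether a kubeconfig uses gke-gcloud-auth-plugin.
# The KUBECONFIG string below is a stand-in; in practice you would read
# the file at ~/.kube/config (or whatever $KUBECONFIG points at).

KUBECONFIG = """\
users:
- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true
"""

def uses_gke_plugin(kubeconfig_text: str) -> bool:
    """Return True if any user entry invokes gke-gcloud-auth-plugin via exec.

    A simple line scan avoids a PyYAML dependency; it only needs to find the
    'command: gke-gcloud-auth-plugin' line under an exec stanza.
    """
    return any(
        line.strip() == "command: gke-gcloud-auth-plugin"
        for line in kubeconfig_text.splitlines()
    )

print(uses_gke_plugin(KUBECONFIG))  # True when the new plugin is configured
```

If this prints `False` after `get-credentials`, the old `gcp` auth provider is likely still in your kubeconfig and you should re-run the steps above with the environment variable set.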