CopyPastor

Detecting plagiarism made easy.

Score: 1; Reported for: Exact paragraph match

Possible Plagiarism

Plagiarized on 2023-09-19
by Prime108

Original Post

Original - Posted on 2022-12-13
by Ben Walding



            

You should check the `users` section in your KUBECONFIG file; it should look like this:

```yaml
users:
- name: $NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      provideClusterInfo: true
```
To make `kubectl` use `gke-gcloud-auth-plugin`, you will need to set the environment variable so that the new plugin is used before running `get-credentials`:
```bash
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER \
  --region $REGION \
  --project $PROJECT \
  --internal-ip
```
I would not have expected the env variable to still be required now that the gcp auth plugin is completely deprecated, but it seems it still is.
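If the plugin is not yet installed, the `installHint` above points to the official instructions; as a rough sketch (assuming gcloud was installed via the SDK installer rather than a distro package manager), installing and verifying it looks like this:

```bash
# Install the auth plugin as a gcloud component
# (apt/yum installs of the SDK ship it as the
#  google-cloud-sdk-gke-gcloud-auth-plugin package instead)
gcloud components install gke-gcloud-auth-plugin

# Confirm the plugin binary is on PATH and responds
gke-gcloud-auth-plugin --version
```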

Your kubeconfig will end up looking like this if the new auth provider is in use.
```yaml
...
- name: $NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      provideClusterInfo: true
```
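To double-check which command kubectl will actually exec for the active context, one illustrative one-liner (assuming the current context's user is the single entry that `--minify` keeps):

```bash
# Should print: gke-gcloud-auth-plugin
kubectl config view --minify -o jsonpath='{.users[0].user.exec.command}'
```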

        