I was looking for equivalent Python code for the gcloud command below:
gcloud beta container hub config-management enable --project $project_id
I couldn't find one in the Google documentation.
Thanks
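For reference, a rough Python sketch that may be equivalent, assuming the command maps to creating a configmanagement feature through the GKE Hub Features API (gkehub.googleapis.com) and that google-api-python-client with application default credentials is available:

from googleapiclient import discovery

project_id = "my-project"  # placeholder

# Assumption: enabling Config Management corresponds to creating a
# "configmanagement" feature on the project's fleet (location "global").
gkehub = discovery.build("gkehub", "v1")
response = gkehub.projects().locations().features().create(
    parent=f"projects/{project_id}/locations/global",
    featureId="configmanagement",
    body={},
).execute()
print(response)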
Related
I have a script on my device, and I want to run some commands in the GCP Cloud Shell from my device. I have seen that there is an API called the Cloud Shell API, but I don't know how it works.
So, how can I run commands in GCP Cloud Shell from my local device using the Cloud Shell API?
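The Cloud Shell API does not expose a single "run command" method; the usual pattern is to start your default environment and then SSH to the host/port it reports. A rough sketch, assuming google-api-python-client, application default credentials, and that your SSH public key has already been added to the environment (users.environments.addPublicKey):

import subprocess
from googleapiclient import discovery

cloudshell = discovery.build("cloudshell", "v1")
env_name = "users/me/environments/default"

# Start the default environment; this returns a long-running operation,
# so in practice poll it (or the environment state) until it is RUNNING.
cloudshell.users().environments().start(name=env_name, body={}).execute()

env = cloudshell.users().environments().get(name=env_name).execute()

# The running environment reports where to SSH to; any command can then
# be run over SSH from the local device.
subprocess.run(
    ["ssh", "-p", str(env["sshPort"]),
     f'{env["sshUsername"]}@{env["sshHost"]}',
     "ls -la"],
    check=True)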
I have created an end-to-end CI/CD pipeline in Azure DevOps. I am trying to clone the original repository and create new pipelines using the Azure CLI(v1), using the below command:
az pipelines create --name {PIPELINE_NAME} --description {PIPELINE_DESCRIPTION} --repository {REPOSITORY_NAME} --branch {BRANCH_NAME} --repository-type {tfsgit} --project {PROJECT_NAME} --organization {ORGANIZATION_NAME} --yml-path {YAML_PATH} --service-connection {SERVICE_CONNECTION_NAME} --subscription {SUBSCRIPTION_ID} --skip-first-run {true}
I am trying to execute the newly created pipeline using the below command:
az pipelines build queue --branch {BRANCH_NAME} --org {ORGANIZATION_NAME} --project {PROJECT_NAME} --definition-id {PIPELINE_ID} --subscription {SUBSCRIPTION_ID}
The problem is that after executing the above commands, I always need to go to the Azure DevOps portal and manually authorize the pipeline to use the service connection. The portal shows a message like this: "This pipeline needs permission to access a resource before this run can continue."
I am using this command to log in to the portal: echo {PAT} | az devops login --organization {ORGANIZATION_NAME}.
How can I avoid having to go to the portal every time to authorize the pipeline to use the service connection? Is there a way I can do this using the CLI?
PS: All the above commands are executed using Python subprocess.
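For context, the queue call is a plain subprocess invocation along these lines (placeholder values, assuming az is on PATH and you are already logged in):

import subprocess

result = subprocess.run(
    ["az", "pipelines", "build", "queue",
     "--branch", "main",
     "--org", "https://dev.azure.com/my-org",
     "--project", "my-project",
     "--definition-id", "42"],
    capture_output=True, text=True, check=True)
print(result.stdout)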
According to the description, you could check the following steps:
Please check the security of the service connection: go to Project Settings > Pipelines > Service connections, open the service connection you use in the YAML pipeline, click the "..." menu in the upper right corner of the service connection page, and choose Security. There you can check whether the YAML pipeline is authorized under Pipeline permissions. You could also try granting access permission to all pipelines.
Check the pipeline permissions. Project Collection Administrators, Project Administrators, and Build Administrators are given all of the above permissions by default.
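If you want to script the authorization itself rather than approve it in the portal, one option (not part of az pipelines, so treat this as a hedged sketch) is the Pipeline Permissions REST API, which can authorize a specific pipeline, or all pipelines, for a service connection:

import requests

# Placeholder values; the PAT needs rights to administer the service connection.
org, project = "my-org", "my-project"
endpoint_id = "SERVICE_CONNECTION_GUID"   # id of the service connection
pipeline_id = 42                          # definition id of the new pipeline
pat = "PAT"

url = (f"https://dev.azure.com/{org}/{project}/_apis/pipelines/"
       f"pipelinePermissions/endpoint/{endpoint_id}?api-version=7.1-preview.1")
body = {"pipelines": [{"id": pipeline_id, "authorized": True}]}
# Or, to authorize every pipeline in the project:
# body = {"allPipelines": {"authorized": True}}

resp = requests.patch(url, json=body, auth=("", pat))
resp.raise_for_status()
print(resp.json())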
On the command line I usually run/create containers with specific GPUs using the --gpus argument:
docker run -it --gpus '"device=0,2"' ubuntu nvidia-smi
The Docker SDK for Python documentation was not very helpful, and I could not find a good explanation of how to do the same with the Python SDK. Is there a way to do it?
This is how you can run/create Docker containers with specific GPUs, similar to the --gpus argument:
import docker

# Request GPUs 0 and 2, the SDK counterpart of --gpus '"device=0,2"'.
client = docker.from_env()
client.containers.run('ubuntu',
                      "nvidia-smi",
                      device_requests=[
                          docker.types.DeviceRequest(device_ids=["0,2"], capabilities=[['gpu']])])
This way you can also use other GPU resource options specified here:
https://docs.docker.com/config/containers/resource_constraints/
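As a follow-up, if you want the container to see every GPU (the equivalent of --gpus all), you can request a count of -1 instead of naming device IDs:

import docker

client = docker.from_env()
# count=-1 means "all available GPUs", mirroring `docker run --gpus all`.
client.containers.run('ubuntu', 'nvidia-smi',
                      device_requests=[
                          docker.types.DeviceRequest(count=-1, capabilities=[['gpu']])])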
I have hundreds of instances, and I want to make sure that the Stackdriver monitoring agent is installed on all my GCP instances. Is there any way I can get a list of the instances in a project that do not have the Stackdriver monitoring agent installed (or the other way around), either by using Python modules or gcloud?
You could query the agent version when you connect through SSH from the Cloud Shell:
gcloud compute ssh INSTANCE-NAME -- "dpkg-query --show --showformat \
'${Package} ${Version} ${Architecture} ${Status}\n' \
stackdriver-agent" --zone=INSTANCE-ZONE
The -- option of gcloud compute ssh allows you to send arguments to the SSH command. From there you can just list the agent version; if the agent is not installed, no version is returned.
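To automate this across hundreds of instances from Python, a rough sketch (assuming google-api-python-client for listing the instances and working gcloud/SSH access to each of them) could loop over the project and flag the instances where the package query fails:

import subprocess
from googleapiclient import discovery

project_id = "my-project"  # placeholder
compute = discovery.build("compute", "v1")

missing_agent = []
request = compute.instances().aggregatedList(project=project_id)
while request is not None:
    response = request.execute()
    for scope in response.get("items", {}).values():
        for instance in scope.get("instances", []):
            zone = instance["zone"].rsplit("/", 1)[-1]
            check = subprocess.run(
                ["gcloud", "compute", "ssh", instance["name"],
                 "--zone", zone, "--command",
                 "dpkg-query --show stackdriver-agent"],
                capture_output=True, text=True)
            if check.returncode != 0:  # not installed (or SSH failed)
                missing_agent.append(instance["name"])
    request = compute.instances().aggregatedList_next(
        previous_request=request, previous_response=response)

print("Instances without the agent:", missing_agent)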
I have deployed my Django application on AWS using Kubernetes.
I am using Docker containers to deploy my application.
I have created a custom management command, let's say
python manage.py customcommand
I want to execute it at the Kubernetes level.
To get the pods I am using
kubectl get pods
I have been trying to find a solution but have not succeeded.
Thank you.
You can access a shell in your running pod by using a command similar to the following:
kubectl exec -ti <podname> -- sh
From the shell prompt you get, you can run your administrative command.
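If you would rather run the management command from Python instead of an interactive shell, a sketch using the official kubernetes client (assuming your kubeconfig is configured and the pod runs in the default namespace) would be:

from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

# Placeholder pod name; pick one from `kubectl get pods`.
output = stream(v1.connect_get_namespaced_pod_exec,
                name="my-django-pod", namespace="default",
                command=["python", "manage.py", "customcommand"],
                stderr=True, stdin=False, stdout=True, tty=False)
print(output)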