I'm trying to get the expire-dataproc-tag from running gcloud dataproc clusters create using Python.
I tried subprocess.Popen; I think because the command returns an ERROR, or because it takes a long time to produce its result, I end up with an empty string.
command and command_1 worked fine; the issue appears when running command_2.
import subprocess

command = "echo hello world"
command_1 = "gcloud compute images list --project {project-id} --no-standard-images"
command_2 = 'gcloud beta dataproc clusters create cluster-name --bucket {bucket} --region europe-west1 --zone europe-west1-b --subnet {subnet} --tags {tag} --project {project-id} --service-account {service-account} --master-machine-type n1-standard-16 --master-boot-disk-size 100 --worker-machine-type n1-standard-1 --worker-boot-disk-size 100 --image {image} --max-idle 2h --metadata enable-oslogin=true --properties {properties} --optional-components=ANACONDA,JUPYTER,ZEPPELIN --enable-component-gateway --single-node --no-address'.split(' ')

process = subprocess.Popen(command_2, stdout=subprocess.PIPE, shell=True)
# process.wait()
try:
    print('inside-try')
    result, err = process.communicate()
    result = result.decode('utf-8')
except Exception as e:
    print('The Error', e)
print('the result: ', result)
print("the-error: ", err)
The output is:
inside-try
ERROR: (gcloud.beta.dataproc.clusters.create) INVALID_ARGUMENT: Dataproc custom image '{image-name}' has expired. Please rebuild this custom image. To extend the custom image expiration date to '2022-02-11T08:29:58.322549Z', please use this cluster property during cluster creation: 'dataproc:dataproc.custom.image.expiration.token=1.{image-name-properties......}'
the result:
the-error: None
I'm trying to get the ERROR: .... output into the result variable (to be printed after the result).
You're not capturing stderr from the process.
Try:
process = subprocess.Popen(
    command,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True
)
That's why err wasn't being set by result, err = process.communicate().
With the above change, err will contain the error message that you're receiving.
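As a side note, on Python 3.7+ subprocess.run with capture_output=True is a simpler way to get both streams; a minimal sketch, assuming command_2 is the same argument list and gcloud is on your PATH:

completed = subprocess.run(command_2, capture_output=True, text=True)
print('the result: ', completed.stdout)
print('the-error: ', completed.stderr)      # the ERROR: ... text from gcloud lands here
print('exit code: ', completed.returncode)  # non-zero when cluster creation fails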
I strongly encourage you to consider using Google's SDKs to interact with its services. Not only are these easier to use but, instead of shipping strings in/out of sub-processes, you can ship Python objects.
Here's the documentation for Creating a Dataproc cluster in Python.
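Roughly, and as a sketch only (check the linked docs for the exact config fields; the google-cloud-dataproc package must be installed, and the names below are placeholders):

from google.cloud import dataproc_v1

def create_cluster(project_id, region, cluster_name):
    # A regional endpoint is required, e.g. europe-west1-dataproc.googleapis.com.
    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    cluster = {
        "project_id": project_id,
        "cluster_name": cluster_name,
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-16"},
            "software_config": {"optional_components": ["ANACONDA", "JUPYTER", "ZEPPELIN"]},
        },
    }
    operation = client.create_cluster(
        request={"project_id": project_id, "region": region, "cluster": cluster}
    )
    # Errors (such as the expired custom image) surface here as Python exceptions
    # instead of text you have to scrape from stderr.
    return operation.result()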
I run AWS Batch jobs from a Python script. This script also runs Bash subprocesses.
I used the following function to put messages on stdout:
import sys

def printf(msg):
    print(msg, file=sys.stdout)
    sys.stdout.flush()
And, for example:
printf('Downloading misc inputs')
With this I can send some messages to AWS CloudWatch.
But I would like to know if there is a way to send the log of one specific Bash command to a specific CloudWatch stream.
To execute my Bash commands I use the function:
def exec_cmd(cmd, shell=True):
    printf("Executing %s" % cmd.strip())
    if not shell:
        p = subprocess.Popen(cmd.split())
    else:
        p = subprocess.Popen(cmd, shell=True, executable='/bin/bash')
    err = p.wait()
    return err
And then:
my_command1 = "/usr/local/bin/mytool arg1 arg2 > /ephemaral/output/log_command1.log 2>&1"
Exit_code1 = exec_cmd(my_command1)
my_command2 = "/usr/local/bin/mytool arg1 arg2 > /ephemaral/output/log_command2.log 2>&1"
Exit_code2 = exec_cmd(my_command2)
I want to know if I can send, in real time, the content of log_command1.log to one CloudWatch stream, and the content of log_command2.log to another CloudWatch stream. Thanks in advance.
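One way this could be approached, as an untested sketch (the log group/stream names below are placeholders, and boto3 needs the usual CloudWatch Logs permissions): read the command's output line by line instead of redirecting it to a file, and forward each line with put_log_events.

import subprocess
import time
import boto3

def exec_cmd_to_stream(cmd, log_group, log_stream):
    """Run cmd and forward each output line to a dedicated CloudWatch Logs stream."""
    logs = boto3.client("logs")
    try:
        logs.create_log_stream(logGroupName=log_group, logStreamName=log_stream)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass
    p = subprocess.Popen(cmd, shell=True, executable='/bin/bash',
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
    for line in p.stdout:
        # One event per call keeps the sketch simple; batching events is kinder to API limits.
        logs.put_log_events(
            logGroupName=log_group,
            logStreamName=log_stream,
            logEvents=[{"timestamp": int(time.time() * 1000),
                        "message": line.rstrip() or " "}],  # empty messages are rejected
        )
    return p.wait()

exit_code1 = exec_cmd_to_stream("/usr/local/bin/mytool arg1 arg2", "my-log-group", "command1")
exit_code2 = exec_cmd_to_stream("/usr/local/bin/mytool arg1 arg2", "my-log-group", "command2")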
I'm trying to update an Azure VM. Eventually my code gets the certificate from Azure Key Vault and saves it in the local certificate store. I've accomplished this successfully using the Azure CLI in Bash; the code is below:
secret=$(az keyvault secret list-versions --vault-name aqrahyhkeyvault --name certificatename --query "[?attributes.enabled].id" --output tsv)
vm_secret=$(az vm secret format --secrets "$secret" --resource-group RAH-AQ --keyvault aqrahyhkeyvault --certificate-store My)
az vm update -g Archive-WSL -n win10new --set osProfile.secrets="$vm_secret"
I'm using the same commands by wrapping them in Python, as most of my code is in this format, but it is throwing an invalid syntax error. I've tried every possible change with double quotes and shuffling them around, with no luck.
import subprocess
import json

def Update_vm(vault_name, certificate_name, rscgroup_name):
    Secret_command = ["az", "keyvault", "secret", "list-versions", "--vault-name", vault_name, "--name", certificate_name, "--query", "[?attributes.enabled].id", "--output", "tsv"]
    create_vm = subprocess.run(Secret_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(create_vm.stdout)
    vm_secret = ["az", "vm", "secret", "format", "--secrets", create_vm.stdout, "--resource-group", rscgroup_name, "--keyvault", vault_name, "--certificate-store", "My"]
    vm_new_secret = subprocess.run(vm_secret, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(vm_new_secret.stdout)
    update_vm_cmd = ["az", "vm", "update", "-g", rscgroup_name, "-n", avm_name, "--set", "osProfile.secrets"=vm_new_secret.stdout]  # the invalid syntax error is raised here
    vm_update = subprocess.run(update_vm_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

if __name__ == "__main__":
    rscgroup_name = "vm-test-group"
    avm_name = "testvm1"
    avm_image = "Win2019Datacenter"
    avm_username = "azuretest"
    avm_password = "mypass"
    avm_size = "Standard_D2_V3"
    vault_name = "aqrahkeyvault"
    certificate_name = "staticwebsite"
    Update_vm(vault_name, certificate_name, rscgroup_name)
I think it might be the way the string is formatted at "osProfile.secrets"=vm_new_secret.stdout
Can you try the following instead?
update_vm_cmd=["az","vm","update","-g",rscgroup_name,"-n",avm_name,"--set",f"osProfile.secrets={vm_new_secret.stdout}"]
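One more detail that may matter here (an assumption worth checking): vm_new_secret.stdout is bytes and ends with a newline, so the f-string would otherwise produce something like osProfile.secrets=b'...'. Running the commands with text=True and stripping the output avoids that:

vm_new_secret = subprocess.run(vm_secret, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
update_vm_cmd = ["az", "vm", "update", "-g", rscgroup_name, "-n", avm_name,
                 "--set", f"osProfile.secrets={vm_new_secret.stdout.strip()}"]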
I need to ssh into a server, execute a few commands, and process the response using subprocess. Here's my code:
command = 'ssh -t -t buildMachine.X.lan; sudo su - buildbot ; build-set sets/set123'
print "submitting command"
result = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
print "got response"
response,err = result.communicate()
print response
This is getting stuck. I have seen other threads that talk about passing a list instead of a string to subprocess and removing shell=True. I did that too, but it didn't work.
Ultimately I need the result of the last command, i.e. build-set, in order to extract some information out of it. Help?
I figured out the solution using univerio's comment.
The command needs to be
command = 'ssh -t -t buildMachine.X.lan \'sudo su - buildbot \'build-set sets/set123\'\''
Each command is effectively an argument to the previous command. This works.
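For comparison, the same nesting can be written with su -c and the list form of Popen, which avoids shell quoting entirely; a sketch, assuming the same host and build command:

import subprocess

result = subprocess.Popen(
    ["ssh", "-t", "-t", "buildMachine.X.lan",
     "sudo su - buildbot -c 'build-set sets/set123'"],
    stdout=subprocess.PIPE, stdin=subprocess.PIPE)
response, err = result.communicate()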
I'm using the subprocess module of Python to run an ssh command over my servers to collect their disk usage. The one thing I'm stuck on: if key-based ssh is not configured on a server, subprocess prompts for a password, which makes my whole script hang, and then I have to kill the script manually. I just want it to skip any server that asks for a password prompt (where ssh keys are not configured) and continue processing the rest.
def MyFunction(server):
    msg = ""
    ps = subprocess.Popen("ssh -l mygroup %s 'df -k /some/directory'" % server, stdout=subprocess.PIPE, shell=True)
    out, err = ps.communicate()
    if err != None:
        msg += "\n" + err
    else:
        msg = out
    return msg

server_list = ['server A', 'server B', 'server C', 'server D']

Final_msg = ""
for server in server_list:
    Final_msg += MyFunction(server)
Any help would be appreciated! :)
If the issue is just that you want to stop ssh from asking you for anything, you can forbid it from doing so.
You can use the SSH option BatchMode:
If set to “yes”, passphrase/password querying will be disabled.
This option is useful in scripts and other batch jobs where no user is present to supply the password.
The argument must be “yes” or “no”. The default is “no”.
So just add -o BatchMode=yes:
ps = subprocess.Popen("ssh -o BatchMode=yes -l mygroup %s 'df -k /some/directory'" % server, stdout=subprocess.PIPE, shell=True)
BTW, why do you need shell=True here? It's better to do
ps = subprocess.Popen(["ssh", "-o", "BatchMode=yes", "-l", "mygroup", server, "df -k /some/directory"], stdout=subprocess.PIPE)
as it is cleaner, safer and internally simpler.
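With BatchMode=yes, a server without working key-based authentication makes ssh fail immediately instead of prompting, so the return code can be used to skip it; a sketch along the lines of the original function:

def MyFunction(server):
    ps = subprocess.Popen(["ssh", "-o", "BatchMode=yes", "-l", "mygroup", server,
                           "df -k /some/directory"],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = ps.communicate()
    if ps.returncode != 0:
        # Key auth failed or the host is unreachable; report it and move on.
        return "\n" + err.decode()
    return out.decode()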
I wanted to know if there is a way to find out the status of the ssh server on a system using Python. I just want to know if the server is active or not (just yes/no). It would help even if it is just a Linux command, so that I can use Python's Popen from the subprocess module to run that command.
Thanks
PS: I'm using openssh-server on Linux (Ubuntu 12.04).
If you want a generic way of telling whether a process is running, you could use ps.
import re
import subprocess

def IsThisProcessRunning(ps_name):
    ps = subprocess.Popen("ps axf | grep %s | grep -v grep" % ps_name,
                          shell=True, stdout=subprocess.PIPE)
    output = ps.stdout.read().decode()
    ps.stdout.close()
    ps.wait()
    if re.search(ps_name, output) is None:
        return False
    else:
        return True
IsThisProcessRunning('/usr/sbin/apache2') # True, if Apache2 is running.
If the name is a commonly used one, you can specify the full path, e.g. /usr/sbin/apache2.
To be safe, in addition to looking for the process name, you can also look for the pid file. This is a common technique used in init.d scripts.
try:
    pf = open('/var/run/my_program.pid', 'r')
    pid = int(pf.read().strip())
    pf.close()
except IOError:
    pid = None

if pid:
    pass  # Process is running (or at least left a pid file behind).
Run service sshd status (e.g. via Popen()) and read what it tells you.
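For example, a sketch (on Ubuntu the service is usually named ssh rather than sshd, and the exact wording of the status output varies by init system):

import subprocess

ps = subprocess.Popen(["service", "ssh", "status"],
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, _ = ps.communicate()
# e.g. "ssh start/running, process 1234" on Ubuntu 12.04 (upstart)
print("running" in out.decode().lower())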