I would like to pass the variable "NUMBER_CAMS" and its value from my Python script to a bash environment file "env_roadrunner".
Following is the code that I have written:
import subprocess
import os
import sys
import ConfigParser
os.chdir("/home/vasudev/LTECamOrchestrator_docker/tools/")
NUMBER_CAMS=sys.argv[2]
cmd = "xterm -hold -e sudo /home/vasudev/LTECamOrchestrator_docker/tools/create_pcap_replay_encoder " \
" /home/vasudev/LTECamOrchestrator_docker/tools/env_roadrunner"
p = subprocess.Popen([cmd] , shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Following is my bash script, which reads the environment variables:
#!/bin/bash
# the name or ip address of the orchestrator
ORCHESTRATOR_IP="192.168.212.131"
# the port of the orchestrator
ORCHESTRATOR_PORT=9000
# password for the admin user
ORCHESTRATOR_PASSWORD='.qoq~^c^%l^U#e~'
# number of cameras to create from this pcap file
NUMBER_CAMS="$N"
# three port numbers that are only used internally but need to be free
I wanted to pass the value of NUMBER_CAMS through my Python script, but I am getting the following error:
Traceback (most recent call last):
File "/home/vasudev/PycharmProjects/Test_Framework/Stream_provider.py", line 19, in <module>
NUMBER_CAMS=sys.argv[2]
IndexError: list index out of range
Any suggestions why I am getting the index out of range error?
You need to set the value of N in the environment so that your script can see that value to assign to NUMBER_CAMS.
import subprocess
import os
import sys
import ConfigParser
os.environ["N"] = "2" # Must be a string, not an integer
cmd = ["xterm",
       "-hold",
       "-e",
       "sudo",
       "-E",
       "./create_pcap_replay_encoder",
       "env_roadrunner"]
p = subprocess.Popen(cmd,
                     cwd="/home/vasudev/LTECamOrchestrator_docker/tools/",
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
Note that sudo ignores the current environment by default when running a command; the -E option I added allows create_pcap_replay_encoder to see the inherited environment. However, you can only use -E if sudo is configured to allow the environment to be preserved.
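As a side note, if you only need the variable visible to a child process (rather than your whole session), you can also hand it to Popen directly via its env argument. This is a minimal sketch with a plain sh child standing in for your script; note that this alone does not survive sudo's environment filtering, so with sudo you would still need -E or an env_keep rule:

```python
import os
import subprocess

# Pass N to the child through Popen's env argument instead of
# mutating os.environ. dict(os.environ, N="2") copies the current
# environment and adds/overrides N.
child_env = dict(os.environ, N="2")
out = subprocess.check_output(["sh", "-c", 'echo "NUMBER_CAMS=$N"'],
                              env=child_env)
print(out.decode().strip())  # NUMBER_CAMS=2
```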
Related
This is a follow-up question to Use tkinter based PySimpleGUI as root user via pkexec.
I have a Python GUI application. It should be able to run as a normal user and as root. For the latter, I know I have to set $DISPLAY and $XAUTHORITY to get a GUI application working under root. I use pkexec to start the application as root.
I assume the problem is how I use os.execvp() to call pkexec with all its arguments, but I don't know how to fix it. In the linked previous question and answer it works when calling pkexec directly from bash.
For this example, the full path of the script should be /home/user/x.py.
#!/usr/bin/env python3
# FILENAME needs to be x.py !!!
import os
import sys
import getpass
import PySimpleGUI as sg


def main_as_root():
    # See: https://stackoverflow.com/q/74840452
    cmd = ['pkexec',
           'env',
           f'DISPLAY={os.environ["DISPLAY"]}',
           f'XAUTHORITY={os.environ["XAUTHORITY"]}',
           f'{sys.executable} /home/user/x.py']
    # output here is
    # ['pkexec', 'env', 'DISPLAY=:0.0', 'XAUTHORITY=/home/user/.Xauthority', '/usr/bin/python3 ./x.py']
    print(cmd)
    # replace the process
    os.execvp(cmd[0], cmd)


def main():
    main_window = sg.Window(title=f'Run as "{getpass.getuser()}".',
                            layout=[[]], margins=(100, 50))
    main_window.read()


if __name__ == '__main__':
    if len(sys.argv) == 2 and sys.argv[1] == 'root':
        main_as_root()  # no return because of os.execvp()
    # else
    main()
Calling the script as /home/user/x.py root means that the script will call itself again via pkexec. I got this output (translated from German):
['pkexec', 'env', 'DISPLAY=:0.0', 'XAUTHORITY=/home/user/.Xauthority', '/usr/bin/python3 /home/user/x.py']
/usr/bin/env: '/usr/bin/python3 /home/user/x.py': No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines
To me it looks like the python3 part of the command is interpreted by env and not by pkexec. Something is not going as expected when the cmd is interpreted via os.execvp().
But when I do this in the shell it works fine:
pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY python3 /home/user/x.py
Based on @TheLizzard's comment:
The approach itself is fine and has no problem.
Only the last element in the command array cmd is wrong: it should be split into two elements.
cmd = ['pkexec',
       'env',
       f'DISPLAY={os.environ["DISPLAY"]}',
       f'XAUTHORITY={os.environ["XAUTHORITY"]}',
       f'{sys.executable}',
       '/home/user/x.py']
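The failure is easy to reproduce in isolation. env resolves its first non-assignment argument as a program name, so a fused "program arg" string is looked up as one executable name and fails, while separate arguments work (a minimal sketch using echo as a stand-in):

```python
import subprocess

# 'env' treats its first non-assignment argument as the program name.
# The fused string "echo hi" is searched for as a single executable
# and fails; separate list elements work.
fused = subprocess.run(["env", "echo hi"],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
split = subprocess.run(["env", "echo", "hi"],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(fused.returncode)  # non-zero: env: 'echo hi': No such file or directory
print(split.returncode)  # 0
```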
I'm trying to spawn multiple tmux sessions with different environment variables from the same Python 3 script.
I have been passing {**os.environ, "CUDA_VISIBLE_DEVICES": str(device_id)} as the env keyword argument to subprocess.Popen.
for device_id in device_ids:
    new_env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(device_id)}
    p = subprocess.Popen([
        'tmux', 'new', '-d', "-c", "./", '-s',
        sesh_name,
        "python3",
        path_to_script
    ], env=new_env)
I'm finding, however, that CUDA_VISIBLE_DEVICES is equal to the first device_id I pass, across all processes. What is the meaning of this!?
Is this an inherent issue with Popen and the subprocess module? If so, how do I fix it?
I've tried passing the device id as an argument to the new process's script, but sadly torch won't let me update the environment variable after it has been imported, and it would be more trouble than it's worth to rework the code for that.
EDIT: Providing minimal example
Save this script as test.py (or whatever else you fancy):
import subprocess
import os
def sesh(name):
    procs = []
    for device_id in [4, 5, 6]:
        proc_env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(device_id)}
        p = subprocess.Popen(['tmux', 'new', '-d', "-c", "./", '-s', name+str(device_id), "python3", "deleteme.py"], env=proc_env)
        procs.append(p)
    return procs

if __name__ == "__main__":
    sesh("foo")
Save this script as deleteme.py within the same directory:
import time
import os
if __name__ == "__main__":
    print(os.environ)
    for i in range(11):
        print("running")
        if "CUDA_VISIBLE_DEVICES" in os.environ:
            print(os.environ["CUDA_VISIBLE_DEVICES"])
        else:
            print("CUDA_VISIBLE_DEVICES not found")
        time.sleep(5)
Then run test.py from the terminal.
$ python3 test.py
Then switch to the tmux sessions to figure out what environment is being created.
For anyone else running into this problem, you can use os.system instead of subprocess.Popen in the following way.
import os
def sesh(name, device_id, script):
    command = "tmux new -d -s \"{}{}\" 'export CUDA_VISIBLE_DEVICES={}; python3 {}'"
    command = command.format(
        name,
        device_id,
        device_id,
        script
    )
    os.system(command)

if __name__ == "__main__":
    sesh("foo", 4, "deleteme.py")
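The same workaround also fits subprocess without going through os.system. This is a sketch I have not tested against a real tmux/CUDA setup: the export happens inside the command that tmux runs, so the value no longer depends on the environment the tmux server captured when it first started.

```python
import subprocess

def sesh_argv(name, device_id, script):
    # Set the variable inside the tmux session's own shell command,
    # rather than relying on the environment inherited by the tmux
    # server (which is fixed when the server first starts).
    inner = "export CUDA_VISIBLE_DEVICES={}; python3 {}".format(device_id, script)
    return ["tmux", "new", "-d", "-s", "{}{}".format(name, device_id), inner]

if __name__ == "__main__":
    argv = sesh_argv("foo", 4, "deleteme.py")
    print(argv)
    # Launch with: subprocess.call(argv)
```

Building the argv list separately keeps it inspectable; pass it to subprocess.call when you actually want to start the session.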
I have written a C program that converts one file format to another. It takes one command line argument: filestem.
I executed it as: ./executable_file filestem > outputfile
and got my desired output in outputfile.
Now I want to run that executable from a Python script.
I am trying:
import subprocess
import sys
filestem = sys.argv[1];
subprocess.run(['/home/dev/executable_file', filestem , 'outputfile'])
But it is unable to create the outputfile. I think something needs to be added to handle the > redirection, but I am unable to figure out what. Please help.
subprocess.run has an optional stdout argument; you can give it a file handle, so in your case something like
import subprocess
import sys
filestem = sys.argv[1]
with open('outputfile', 'wb') as f:
    subprocess.run(['/home/dev/executable_file', filestem], stdout=f)
should work. I don't have the ability to test it, so please run it and write back if it works as intended.
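As a quick sanity check of the pattern, here is the same redirection with /bin/echo standing in for the real executable (echo is an assumption here, used only so the sketch runs anywhere):

```python
import subprocess

# Same stdout-to-file redirection pattern, with echo as a stand-in
# for the real executable.
with open('outputfile', 'wb') as f:
    subprocess.run(['echo', 'hello'], stdout=f)

with open('outputfile') as f:
    print(f.read().strip())  # hello
```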
You have several options:
NOTE - Tested in CentOS 7, using Python 2.7
1. Try pexpect:
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import sys

import pexpect

filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
command_output, exitstatus = pexpect.run("/usr/bin/bash -c '{0}'".format(cmd), withexitstatus=True)
if exitstatus == 0:
    print(command_output)
else:
    print("Houston, we've had a problem.")
2. Run subprocess with shell=true (Not recommended):
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import sys
import subprocess
filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
result = subprocess.check_output(cmd, shell=True)  # with shell=True, pass the command as a single string; or subprocess.call(cmd, shell=True)
print(result)
It works, but python.org frowns upon this, due to the chance of a shell injection: see "Security Considerations" in the subprocess documentation.
3. If you must use subprocess without a shell, run each command separately and pipe the STDOUT of the previous command into the STDIN of the next:
from subprocess import Popen, PIPE

p1 = Popen(cmd1, stdout=PIPE)
p2 = Popen(cmd2, stdin=p1.stdout, stdout=PIPE)
stdout_data, stderr_data = p2.communicate()
etc...
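Here is a concrete, runnable instance of that chaining, with printf and grep as stand-in commands (the equivalent of the shell pipeline printf 'a\nbb\n' | grep bb):

```python
from subprocess import Popen, PIPE

# Chain two processes: p1's stdout feeds p2's stdin.
p1 = Popen(["printf", r"a\nbb\n"], stdout=PIPE)
p2 = Popen(["grep", "bb"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
print(out.decode().strip())  # bb
```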
Good luck with your code!
Need help in python subprocess to copy file from host to container
Here is the Python code I have tried:
import subprocess
output=subprocess.check_output(['docker','ps'],
universal_newlines=True)
x=output.split('\n')
for i in x:
    if i.__contains__("name_of_container"):
        container_id = i[:12]
subprocess.call(["docker cp", "some_file.py", container_id:"/tmp"])
subprocess.call(['docker','exec','-it', container_id,'bash'])
This should work:
import subprocess
output=subprocess.check_output(['docker','ps'],
universal_newlines=True)
x=output.split('\n')
for i in x:
    if i.__contains__("inspiring_sinoussi"):
        container_id = i[:12]
        container_id_with_path = container_id + ":/tmp"
        subprocess.call(["docker", "cp", "/root/some_file.py", container_id_with_path])
        subprocess.call(['docker', 'exec', '-it', container_id, 'bash'])
In a subprocess call, the arguments are separate list items. In your case container_id:/tmp should be a single argument, since there is no space between the two parts. Because container_id is a variable, it can't be written together with :/tmp directly, so I created a new variable, container_id_with_path, which appends the :/tmp path to it.
Running the script gives me the desired result.
$ python copy.py
/ # ls /tmp/
hsperfdata_root tomcat-docbase.1849924566121837123.9090
some_file.py
Some errors in your code:
container_id:"/tmp" is not valid Python syntax
"docker cp" is not a valid single argument in subprocess; docker and cp must be separate list items
the docker cp call is not inside the for loop
So, I guess next is your fix:
for i in x:
    if i.__contains__("name_of_container"):
        container_id = i[:12]
        subprocess.call(["docker", "cp", "some_file.py", container_id + ":/tmp"])
I am new to Python. I am trying to run two commands in my script:
1) script filename.txt
2) ssh xyz#xyz.com
My script stops after the 1st command is executed. When I exit out of bash, the 2nd command is executed. I tried 2 different scripts; both have the same issue.
1) Script-1
import os
import subprocess
from subprocess import call
from datetime import datetime
call (["script","{}.txt".format(str(datetime.now()))])
echo "ssh xyz#xyz.com"
2) Script-2
call(["script", "{}.txt".format(str(datetime.now()))])

def subprocess_cmd(command):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print proc_stdout

subprocess_cmd('ssh ssh xyz#xyz.com')