When I try to call ulimit -n from subprocess, i.e.
subprocess.check_output(['ulimit', '-n'])
I get the following error:
OSError: [Errno 2] No such file or directory
This is strange, because the command is valid on the command line. Previous answers to similar questions focus on the need to pass the command as a list, which I have done. Other answers have mentioned that alias commands can cause problems for subprocess, but ulimit is not an alias. If I use the shell=True option, the code works, but I would like to understand why.
ulimit is a wrapper around a system call that limits the resources of the current process. Because it acts on the current process, calling it in a child process has no effect on the parent.
For this reason, the shell implements it as a built-in, so there is no such binary.
If you were to spawn a shell just to call ulimit and then let that shell exit, you would have accomplished nothing: the process whose limits were changed is gone. This is why commands like cd, which affect the current process, must be implemented as shell built-ins.
This means that you cannot call it as a subprocess from Python. Fortunately, Python has a module that wraps the underlying system calls: https://docs.python.org/3/library/resource.html
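For example, a minimal sketch of the resource-module equivalent of ulimit -n:
import resource

# Query the current open-file limits of this process (like ulimit -n / -Hn).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# Raise the soft limit (must not exceed the hard limit; like ulimit -n 4096).
resource.setrlimit(resource.RLIMIT_NOFILE, (4096, hard))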
Related
I'm attempting to run a Linux script through Python's subprocess module. Below is the subprocess command:
result = subprocess.run(['/dir/scripts/0_texts.sh'], shell=True)
print(result)
Here is the 0_texts.sh script file:
cd /dir/TXTs
pylanguagetool text_0.txt > comments_0.txt
The subprocess command executes the script file and writes a new comments_0.txt in the correct directory, but the execution fails: comments_0.txt contains only the error "input file is required", and the subprocess result returns returncode=2. When I run pylanguagetool text_0.txt > comments_0.txt directly in the terminal, the command executes properly, and comments_0.txt is written from the proper input file text_0.txt.
Any suggestions on what I'm missing?
There is some ambiguity here: it's not obvious which shell runs 0_texts.sh each time it's invoked, or whether that shell has the values you expect in environment variables like PATH, which could result in a different copy of pylanguagetool running than when you call it at the command line.
First I'd suggest removing the shell=True option in subprocess.run, which only involves another, potentially different, shell here. Next I would change subprocess.run(['/dir/scripts/0_texts.sh']) to subprocess.run(['bash', '/dir/scripts/0_texts.sh']) (or whichever shell you meant to run, probably bash or dash) to remove that source of ambiguity. Finally, you can try using type pylanguagetool in the script, invoking pylanguagetool with its full path, or running bash /dir/scripts/0_texts.sh from your terminal to debug the situation further.
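Putting the first two suggestions together, the call would look like this:
import subprocess

# Name the interpreter explicitly and drop shell=True so no extra,
# potentially different shell gets involved.
result = subprocess.run(['bash', '/dir/scripts/0_texts.sh'])
print(result.returncode)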
A bigger-picture issue is, pyLanguageTool is a Python library, so you're almost certainly going to be better off calling its functions from your original Python script directly instead of using a shell script as an intermediary.
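If you do go that route, the library's api module exposes a check function roughly along these lines; the exact signature here is an assumption based on the project's README, so treat it as a sketch and adjust for your installed version:
from pylanguagetool import api

# Sketch only: api.check's parameters are taken from pyLanguageTool's
# README and may differ in your installed version.
with open('/dir/TXTs/text_0.txt') as f:
    text = f.read()
matches = api.check(text, api_url='https://languagetool.org/api/v2/', lang='en-US')
print(matches)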
action_publisher = subprocess.Popen(
["bash", "-c", "/opt/ros/melodic/bin/rostopic pub -r 20 /robot_operation std_msgs/String start"],
env={'ROS_MASTER_URI': 'http://10.42.0.49:11311\''})
I tried running it with shell=True and shell=False, calling it with bash, and running my executable directly, and I always get the same error:
Traceback (most recent call last):
File "/opt/ros/melodic/bin/rostopic", line 34, in <module>
import rostopic
ImportError: No module named rostopic
How can I call a shell executable with Popen through Python without hitting this issue? I've tried every combination possible, as well as other solutions proposed on Stack Overflow, and it still tries to import the module instead of running the executable in a shell.
I can identify several problems with your attempt, but I'm not sure I have identified them all.
You should use subprocess.check_call or subprocess.run if you just want the subprocess to run and your Python script to wait for it to complete. If you need raw subprocess.Popen(), there are several additional plumbing steps you must perform yourself that these higher-level functions would otherwise handle for you.
Your use of env will replace all variables in the environment. Copy the existing environment instead so you don't clobber useful settings like PYTHONPATH; losing those may well be what's preventing the subprocess from finding the library it needs.
The shell invocation seems superfluous.
The stray escaped single quote at the end of 'http://10.42.0.49:11311\'' definitely looks wrong.
With that, try this code instead; but please follow up with better diagnostics if this does not yet solve your problem completely.
import subprocess
import os
# ...
# Copy the current environment so settings like PYTHONPATH survive,
# then override just the one variable (without the stray quote).
env = os.environ.copy()
env['ROS_MASTER_URI'] = 'http://10.42.0.49:11311'
# Run the binary directly (no shell) and raise if it exits nonzero.
action_publisher = subprocess.run(
    ["/opt/ros/melodic/bin/rostopic", "pub", "-r", "20",
     "/robot_operation", "std_msgs/String", "start"],
    env=env, check=True)
If rostopic is actually a Python program, a better solution altogether might be to import it directly instead.
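For instance, assuming a working ROS Melodic environment (setup.bash already sourced), the same publication could be done natively with rospy; this is an untested sketch of the equivalent of rostopic pub -r 20 /robot_operation std_msgs/String start:
import rospy
from std_msgs.msg import String

# Publish "start" on /robot_operation at 20 Hz, matching -r 20.
rospy.init_node('robot_operation_publisher')
pub = rospy.Publisher('/robot_operation', String, queue_size=10)
rate = rospy.Rate(20)
while not rospy.is_shutdown():
    pub.publish('start')
    rate.sleep()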
It sounds like rostopic is itself a Python script that tries to import the rostopic module. If you're getting that ImportError even when you execute the file directly from a shell, then it has nothing to do with subprocess.Popen; indeed, the traceback shows the failure happening inside /opt/ros/melodic/bin/rostopic itself.
Did you go through the installation correctly, specifically the environment setup? http://wiki.ros.org/melodic/Installation/Ubuntu#melodic.2FInstallation.2FDebEnvironment.Environment_setup
It sounds like you need to source a particular setup file so that the paths where the Python module is located are available when you execute the script.
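For example, a sketch that sources the setup file in the same shell instance that runs the command (the setup path assumes a standard ROS Melodic install):
import subprocess

# Source the ROS environment first so rostopic's Python modules are found;
# both commands must run in the same bash instance.
subprocess.run(
    ["bash", "-c",
     "source /opt/ros/melodic/setup.bash && "
     "rostopic pub -r 20 /robot_operation std_msgs/String start"],
    check=True)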
As far as I can see, your .Popen command will try to execute
bash -c /opt/ros/melodic/bin/rostopic pub -r 20 /robot_operation std_msgs/String start
whereas bash -c has to be followed by the whole command as a single string argument. Thus, you may need to add single quotes.
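That is, typed at an actual command line, it would need quoting like:
bash -c '/opt/ros/melodic/bin/rostopic pub -r 20 /robot_operation std_msgs/String start'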
I'm trying to write my own shell script in Python for SSH to invoke (using the command= option in authorized_keys files). Currently I simply call the original SSH command (SSH sets it as an environment variable before the script is called). However, I always end up with a git error about the remote end hanging up unexpectedly.
My Python code is literally:
#!/usr/bin/python
import os
import subprocess
if os.environ.get('SSH_ORIGINAL_COMMAND') is not None:
    subprocess.Popen(os.environ.get('SSH_ORIGINAL_COMMAND'), shell=True)
else:
    print 'who the *heck* do you think you are?'
Please let me know what is preventing the git command from successfully allowing the system to work. For reference, the command that is being called on the server when a client calls git push is git-receive-pack /path/to/repo.git.
Regarding the Python code shown above, I have tried using shell=True and shell=False (correctly passing the command as a list when False) and neither works correctly.
Thank you!
Found the solution!
You'll need to call the communicate() method of the subprocess object created by Popen call.
proc = subprocess.Popen(args, shell=False)
# communicate() waits for the process to finish (and would feed/collect
# piped streams, if any were set up).
proc.communicate()
I'm not entirely sure why, but I think it has to do with communicate() both allowing data to be passed via stdin and, crucially, waiting for the subprocess to finish before the wrapper script exits. I thought the process would automatically accept input since I didn't override the input stream anywhere, but perhaps a manual call to communicate() is needed to kick things off; hopefully someone can weigh in here!
You also can't pass stdout=subprocess.PIPE, as it will cause the command to hang. Again, I'm not sure if this is because of how git works or something about the whole process. Hopefully this at least helps someone in the future!
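Putting the pieces together, a minimal version of the wrapper (an untested sketch) looks like this:
#!/usr/bin/python
import os
import subprocess

cmd = os.environ.get('SSH_ORIGINAL_COMMAND')
if cmd is not None:
    # Let the child inherit stdin/stdout (no PIPEs!) so git's pack
    # protocol can flow, and wait for it to finish before exiting.
    proc = subprocess.Popen(cmd, shell=True)
    proc.communicate()
else:
    print 'who the *heck* do you think you are?'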
I am trying to compile and run a C program from Python and want to give it input using the "<" operator, but it's not working as expected.
If I compile the C program and run it by giving input through a file, it works; for example
./a.out <inp.txt works
But if I try to do the same thing from a Python script, it does not work as expected.
For example:
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x"])
and
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x","<inp.txt"])
Both scripts ask for input through the terminal. But I think the second script should read from the file. Why do both programs behave the same?
To complement @Jonathan Leffler's and @alastair's helpful answers:
Assuming you control the string you're passing to the shell for execution, I see nothing wrong with using the shell for convenience. [1]
subprocess.call() has an optional Boolean shell parameter, which causes the command to be passed to the shell, enabling I/O redirection, referencing environment variables, ...:
subprocess.call("./x <inp.txt", shell = True)
Note how the entire command line is passed as a single string rather than an array of arguments.
[1]
Avoid use of the shell in the following cases:
If your Python code must run on platforms other than Unix-like ones, such as Windows.
If performance is paramount.
If you find yourself "outsourcing" tasks better handled on the Python side.
If you're concerned about lack of predictability of the shell environment (as @alastair is):
subprocess.call with shell = True always creates non-interactive non-login instances of /bin/sh - note that it is NOT the user's default shell that is used.
sh does NOT read initialization files for non-interactive non-login shells (neither system-wide nor user-specific ones).
Note that even on platforms where sh is bash in disguise, bash will act this way when invoked as sh.
Every shell instance created with subprocess.call with shell = True is its own world: its environment is neither influenced by previous shell instances nor does it influence later ones (see the short demonstration at the end of this note).
However, the shell instances created do inherit the environment of the python process itself:
If you started your Python program from an interactive shell, then that shell's environment is inherited. Note that this only pertains to the current working directory and environment variables, and NOT to aliases, shell functions, and shell variables.
Generally, that's a feature, given that Python (CPython) itself is designed to be controllable via environment variables (for 2.x, see https://docs.python.org/2/using/cmdline.html#environment-variables; for 3.x, see https://docs.python.org/3/using/cmdline.html#environment-variables).
If needed, you can supply your own environment to the shell via the env parameter; note, however, that you'll have to supply the entire environment in that event, potentially including variables such as USER and HOME, if needed; simple example, defining $PATH explicitly:
subprocess.call('echo $PATH', shell=True,
                env={'PATH': '/sbin:/bin:/usr/bin'})
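And to demonstrate the isolation point above, a variable set by one call is invisible to the next:
import subprocess

# Each call spawns a fresh /bin/sh: the variable set by the first call
# is gone by the time the second call runs.
subprocess.call('foo=bar', shell=True)
subprocess.call('echo "foo is: [$foo]"', shell=True)  # prints: foo is: []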
The shell does I/O redirection for a process. Based on what you're saying, the subprocess module does not do I/O redirection like that. To demonstrate, run:
subprocess.call(["sh","-c", "./x <inp.txt"])
That runs the shell and should redirect the I/O. With your code, your program ./x is being given an argument <inp.txt which it is ignoring.
NB: the alternative call to subprocess.call is purely for diagnostic purposes, not a recommended solution. The recommended solution involves reading the (Python 2) subprocess module documentation (or the Python 3 documentation for it) to find out how to do the redirection using the module.
import subprocess
# Open the input file and connect it to the child's standard input.
i_file = open("inp.txt")
subprocess.call("./x", stdin=i_file)
i_file.close()
If your script is about to exit so you don't have to worry about wasted file descriptors, you can compress that to:
import subprocess
subprocess.call("./x", stdin=open("inp.txt"))
By default, the subprocess module does not pass the arguments to the shell. Why? Because running commands via the shell is dangerous; unless they're correctly quoted and escaped (which is complicated), it is often possible to convince programs that do this kind of thing to run unwanted and unexpected shell commands.
Using the shell for this would be wrong anyway. If you want to take input from a particular file, you can use subprocess.Popen, setting the stdin argument to a file descriptor for the file inp.txt (you can get the file descriptor by calling fileno() on a Python file object, or simply pass the file object itself).
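A minimal sketch of that approach:
import subprocess

# Pass the open file object as stdin; Popen accepts any object
# with a fileno() method.
with open('inp.txt') as i_file:
    proc = subprocess.Popen(['./x'], stdin=i_file)
    proc.wait()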
I'm using subprocess to start a process and let it run in the background, it's a server application. The process itself is a java program with a thin wrapper (which among other things, means that I can just launch it as an executable without having to call java explicitly).
I'm using Popen to run the process, and when I set shell=False it runs, but it spawns two processes instead of one. The first process has init as its parent, and when I inspect it via ps it just displays the raw command. The second process, however, displays the expanded java arguments (-D and -X flags); this is what I expect to see and how the process looks when I run the command manually.
Interestingly, when I set shell=True, the command fails. It does have a help message, but the message doesn't seem to indicate a problem with my argument list (and there shouldn't be one). Everything is the same except the shell named argument to Popen.
I'm using Python 2.7 on Ubuntu. Not really sure what's going on here, any help is appreciated. I suppose it's possible that the java command is doing an exec/fork and for some reason, the parent process isn't dying when I start it through Python.
I saw this SO question which looked promising but doesn't change the behavior that I'm experiencing.
This is actually more of a question about the wrapper than about Python -- you would get the same behavior running it from any other language.
To get the behavior you want, the wrapper would want to have the line where it invokes the JVM look as follows:
exec java -D... -cp ... main.class.here "$#"
...as opposed to lacking the exec on front:
java -D... -cp ... main.class.here "$#"
In the former case, the process image of the wrapper is replaced with that of the JVM it invokes; in the latter, the wrapper waits for the JVM to exit, and then continues to run.
If the wrapper does any cleanup after JVM exit, using exec will prevent this from happening and would thus be the Wrong Thing -- in this case, you would want the wrapper to still exist while the JVM runs, as otherwise it would be unable to perform cleanup afterwards.
Be aware that if the wrapper is responsible for detaching the subprocess, it needs to be able to close open file handles for this to happen correctly. Consider passing close_fds=True to your Popen call if your parent process has more file descriptors than only stdin, stdout and stderr open.
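For example (the wrapper path here is hypothetical):
import subprocess

# close_fds=True keeps extra inherited descriptors out of the detached
# server; on Python 2.7 it must be passed explicitly, while Python 3
# defaults to True.
server = subprocess.Popen(['/path/to/wrapper'], close_fds=True)
print('launched wrapper with pid %d' % server.pid)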