Python subprocess piping to stdin

I'm trying to use Python's Popen to achieve the equivalent of this shell command:
echo "hello" | docker exec -i $3 sh -c 'cat >/text.txt'
The goal is to pipe the "hello" text into the docker exec command and have it written to a file inside the container.
I've tried the following, but can't get it to work:
import subprocess
from subprocess import Popen, PIPE, STDOUT
p = Popen(('docker', 'exec', '-i', 'nginx-ssl', 'sh', '-c', 'cat >/text.txt'), stdin=subprocess.PIPE)
p.stdin.write('Hello')
p.stdin.close()

You need to give stdin the newline as well:
p.stdin.write('Hello\n')
The same is true of sys.stdout: print adds the newline for you, but anything you write to a file object manually has to include it. Note also that in Python 3 the pipe is binary unless you pass text=True to Popen, so write() expects bytes such as b'Hello\n'. You should use p.communicate() instead, though; it is made for exactly this: it writes the input, closes stdin, and waits for the process to exit.
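A minimal corrected sketch, assuming Python 3 and the same container name as in the question; text=True makes the pipe accept str instead of bytes:
import subprocess
from subprocess import Popen, PIPE

# text=True gives a text-mode pipe, so a plain str can be written to stdin
p = Popen(('docker', 'exec', '-i', 'nginx-ssl', 'sh', '-c', 'cat >/text.txt'),
          stdin=PIPE, text=True)
p.communicate('Hello\n')  # writes the input, closes stdin, waits for exit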

Related

Unable to print tcpdump information using Python subprocess

I want to process tcpdump output in a Python script, and so far I have arrived at this implementation:
from subprocess import Popen, PIPE, CalledProcessError
import os
import signal
import time

if __name__ == "__main__":
    cmd = ["sudo", "tcpdump", "-c", "1000", "-i", "any", "port", "22", "-n"]
    with Popen(cmd, stdout=PIPE, bufsize=1, universal_newlines=True) as p:
        try:
            for line in p.stdout:
                print(line, flush=True)  # process line here
        except KeyboardInterrupt:
            print("Quitting")
This is what I understood from the second answer to this previously asked question.
While it does not wait for the subprocess to complete before printing tcpdump's output, I still get the output in chunks of 20-30 lines at a time. Is there a way to read even a single line as soon as it appears on the subprocess's stdout?
PS: I am running this script on a Raspberry Pi 4 with Ubuntu Server 22.04.1.
tcpdump uses a larger buffer if you connect its standard output to a pipe. You can easily see this by running the following two commands. (I changed the count from 1000 to 40 and removed port 22 in order to quickly get output on my system.)
$ sudo tcpdump -c 40 -i any -n
$ sudo tcpdump -c 40 -i any -n | cat
The first command prints one line at a time. The second collects everything in a buffer and prints everything when tcpdump exits. The solution is to tell tcpdump to use line buffering with the -l argument.
$ sudo tcpdump -l -c 40 -i any -n | cat
Do the same in your Python program.
import subprocess

cmd = ["sudo", "tcpdump", "-l", "-c", "40", "-i", "any", "-n"]
with subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=0, text=True) as p:
    for line in p.stdout:
        print(line.strip())
When I run this, I get one line printed at a time.
Note that universal_newlines is a backward-compatible alias for text, so the latter should be preferred.

Run shell script in Python with specific parameters

I wish to run a script, let's call it api.sh. The script takes various arguments:
-t token
-r rules.json
-s data.json
and it creates a new JSON file, e.g. data_2.json.
When I run this in terminal I use the following command:
./api.sh -t token -r rules.json -s data.json > data_2.json
However, I wish to run this command line in Python. Any suggestions are appreciated.
Thanks,
I don't know if it supports Python, but you can use getopts.
Does this test.py work:
import subprocess
from subprocess import Popen
path_to_output_file = 'data_2.json'
myoutput = open(path_to_output_file, 'w+')
p = Popen(["./api.sh", "-t", "token", "-r", "rules.json", "-s", "data.json"],
          stdout=myoutput, stderr=subprocess.PIPE, universal_newlines=True)
output, errors = p.communicate()
myoutput.close()  # close the output file once the script has finished
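On Python 3.5 or newer, subprocess.run is a more compact way to do the same thing; a sketch assuming the same file names as above:
import subprocess

# redirect the script's stdout straight into the output file
with open('data_2.json', 'w') as out:
    result = subprocess.run(
        ["./api.sh", "-t", "token", "-r", "rules.json", "-s", "data.json"],
        stdout=out, stderr=subprocess.PIPE, universal_newlines=True)
print(result.stderr)  # whatever the script wrote to stderr, if anything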

subprocess command execution

What is the best way to execute the command below in Python in a single line?
echo $(readlink /sys/dev/block/$(mountpoint -d /))
I tried running "mountpoint -d /" first with os.system, taking its output, substituting it into "readlink /sys/dev/block/{0}".format(out.strip()), and echoing that, which works. I also tried subprocess, subprocess.Popen, and subprocess.check_output, but they raise CalledProcessError.
cmd = "echo $(readlink /sys/dev/block/$(mountpoint -d /))"
You have to run the inner command separately, and you can use Python's own functions to read the link:
import subprocess
import os
path = "/"
device = subprocess.run(["mountpoint", "-d", path], stdout=subprocess.PIPE, encoding="utf8").stdout.strip()
link = os.readlink("/sys/dev/block/" + device)
print(link)
You probably want something like the following:
cmd = "bash -c 'echo $(readlink /sys/dev/block/$(mountpoint -d /))'"
echo doesn't substitute $() blocks; that is what your shell does, so you have to invoke a shell. os.system(cmd) should work then.
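For example, a sketch using subprocess.run rather than os.system (assuming Python 3.7+ for capture_output), so the output is captured instead of just printed:
import subprocess

# shell=True hands the whole line to /bin/sh, which performs both $() substitutions
result = subprocess.run(
    "echo $(readlink /sys/dev/block/$(mountpoint -d /))",
    shell=True, capture_output=True, text=True)
print(result.stdout.strip())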

When using Popen how can I read from stdout?

I'm trying to use Popen to start a shell process, execute commands, print the commands to output, and print the output of the commands (if any). Writing to stdin is working fine, but trying to read from stdout causes the program to freeze.
Here's my program:
with Popen(["/bin/sh"], stdin=PIPE, stdout=PIPE, stderr=STDOUT, text=True, bufsize=0) as proc:
with open("script.txt") as scriptFile:
for line in scriptFile.readlines():
proc.stdin.write(line)
print(f"$ {line.strip()}")
# Reading from proc.stdout locks the program.
for outputLine in proc.stdout.readlines():
print(outputLine)
And here's a simplified script.txt. (The real one creates a git repository and illustrates the use of various git commands.)
mkdir project
cd project
echo "This is line 1.\nThis is line 2." > text1.txt
cat text1.txt
If I don't try to read from stdout all of the commands in script.txt (and my real version with multiple git commands) work fine.
Is there any way to get the output of commands from stdout interspersed with writing them to stdin?
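One common workaround, offered here only as a sketch rather than a definitive fix: readlines() blocks until the shell closes its stdout, so instead have the shell echo a sentinel after each command and read line by line until the sentinel appears. The SENTINEL name below is made up for illustration; any string the commands themselves won't print will do:
from subprocess import Popen, PIPE, STDOUT

SENTINEL = "__CMD_DONE__"  # hypothetical marker, not from the original question

with Popen(["/bin/sh"], stdin=PIPE, stdout=PIPE, stderr=STDOUT,
           text=True, bufsize=0) as proc:
    with open("script.txt") as scriptFile:
        for line in scriptFile:
            print(f"$ {line.strip()}")
            proc.stdin.write(line)
            proc.stdin.write(f"echo {SENTINEL}\n")  # mark end of this command's output
            proc.stdin.flush()
            for outputLine in proc.stdout:  # reads one line at a time, not to EOF
                if outputLine.strip() == SENTINEL:
                    break
                print(outputLine, end="")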

wget missing url with shlex and subprocess

I'm struggling to understand why this fails with a wget: missing URL error:
import shlex
import subprocess
copy_command = "wget -O - 'http://example.com/somepath/somefile.txt?someparam=test' | sshpass -p pass ssh user#localhost -p 2222 \"cat - > /upload/somefile.txt\""
cmd = shlex.split(copy_command, posix=False)
with subprocess.Popen(
cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=True
) as proc:
output, error = proc.communicate()
What am I missing here? If I just give subprocess the copy_command string directly, then it works without issues.
Setting up a pipeline requires the parent process to spawn all the programs involved and then connect (pipe) the stdio of one to another. The Python documentation for subprocess explains how to do this.
It works when you pass the command as a string with shell=True because Python then hands the whole command line off to a subshell, and that shell handles all those details. When you pass a list together with shell=True on POSIX, only the first element ("wget") is treated as the command to run, which is why wget complains about a missing URL.
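A sketch of such a pipeline, following the "Replacing shell pipeline" recipe in the subprocess documentation, with the URL and ssh details from the question used as placeholders:
import subprocess

wget = subprocess.Popen(
    ["wget", "-O", "-", "http://example.com/somepath/somefile.txt?someparam=test"],
    stdout=subprocess.PIPE)
ssh = subprocess.Popen(
    ["sshpass", "-p", "pass", "ssh", "-p", "2222", "user@localhost",
     "cat - > /upload/somefile.txt"],
    stdin=wget.stdout)
wget.stdout.close()  # let wget receive SIGPIPE if ssh exits first
ssh.communicate()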
