Executing Git server commands through Python shell

I'm trying to write my own shell script in Python for SSH to call (using the command= parameter in authorized_keys files). Currently I'm simply calling the original SSH command (it is set in an environment variable before SSH calls the script). However, I always end up with a git error about the remote end hanging up unexpectedly.
My Python code is essentially:
#!/usr/bin/python
import os
import subprocess

if os.environ.get('SSH_ORIGINAL_COMMAND') is not None:
    subprocess.Popen(os.environ['SSH_ORIGINAL_COMMAND'], shell=True)
else:
    print 'who the *heck* do you think you are?'
Please let me know what is preventing the git command from completing successfully. For reference, the command being run on the server when a client does git push is git-receive-pack /path/to/repo.git.
Regarding the Python code shown above, I have tried both shell=True and shell=False (correctly passing the command as a list when False), and neither works.
Thank you!

Found the solution!
You'll need to call the communicate() method of the subprocess object created by the Popen call.
proc = subprocess.Popen(args, shell=False)
proc.communicate()
I'm not entirely sure why, but I think it has to do with communicate() letting data flow via stdin, and with the fact that communicate() waits for the child to terminate, so the wrapper doesn't exit (and tear down the SSH session) while git-receive-pack is still transferring data. I thought the process would automatically accept input since I didn't override the input stream anywhere, but apparently an explicit wait is needed... hopefully someone can weigh in here!
You also can't pass stdout=subprocess.PIPE, as it will cause the command to hang: git's protocol output ends up in a pipe that nothing reads, instead of going back over the SSH connection to the client. Hopefully this at least helps someone in the future!
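Putting it together, a minimal sketch of the full wrapper (matching the Python 2 style of the original script) might look like this:

#!/usr/bin/python
import os
import subprocess

cmd = os.environ.get('SSH_ORIGINAL_COMMAND')
if cmd is not None:
    # Let git-receive-pack inherit the SSH session's stdin/stdout,
    # and wait for it to finish before this wrapper exits.
    proc = subprocess.Popen(cmd, shell=True)
    proc.communicate()
else:
    print 'who the *heck* do you think you are?'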

Related

strange problem with running bash-scripts from python in docker

I have a python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr into files, so I have a wrapper like:
def execute_shell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my python script on a Mac. But if I launch it in a docker container (FROM ubuntu:18.04) I can't see any log files. I can fix it if I use bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...). But it looks like too much magic.
I thought about the default shell of the user which launches the python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone could explain to me what is happening. And maybe I can add some lines to the dockerfile so the same python script works both inside and outside the docker container?
As the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is weird, and requiring shell=True here is misdirected. It also explains the Docker difference: shell=True runs the command with /bin/sh, and &> is a Bash extension. On Ubuntu, /bin/sh is Dash, which parses command &> file as "run command in the background, then truncate file", so your log files come out empty; on macOS, /bin/sh is Bash, so it happened to work there. Better to skip the shell entirely and let Python do the redirection. You want
import os
import subprocess

with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
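If you do want to keep the shell redirection, a possible alternative (a sketch, reusing the script and stage_name variables from the question) is to point subprocess at Bash explicitly via the executable parameter, so &> is parsed the way you expect:

import subprocess

# Run the command line under bash rather than the default /bin/sh,
# so the Bash-only &> redirection works inside the Ubuntu container too.
subprocess.check_call('{} &>logs/{}'.format(script, stage_name),
                      shell=True, executable='/bin/bash')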

automated python script via vm

I'm new at a company in IT, and very few people here know Python, so I can't ask them for help.
The problem: I need to create a script in Python that connects via SSH from my VM to my client's server; once my script connects, I need to find a log file and search it for a few pieces of data.
I tested my script on my Windows machine with a copy of that file, and it found everything I need. However, I don't know how to make the connection via SSH.
I tried this, but I don't know where to start:
from subprocess import Popen, PIPE
import sys
ssh = subprocess.check_output(['ssh', 'my_server', 'password'], shell = True)
ssh.stdin.write("cd /path/")
ssh.stdin.write("cat file | grep err|error")
This generates an error: name 'subprocess' is not defined.
I don't understand how to use subprocess, nor how to begin developing the solution.
Note: I can't use Paramiko because I don't have permission to install packages via pip or download the package manually.
You didn't import subprocess itself, so you can't refer to it; you only imported Popen and PIPE from it.
check_output simply runs a process and waits for it to finish, so you can't use it for a process you want to interact with. But there is nothing truly interactive here, so let's actually use it.
The first argument to subprocess.Popen() and friends is either a string for the shell to parse, with shell=True; or a list of tokens passed directly to exec with no shell involved. (On some platforms, passing a list of tokens with shell=True actually happens to work, but this is coincidental, and could change in a future version of Python.)
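For example, both of these run the same grep (the file name is a placeholder):

import subprocess

# String + shell=True: the shell parses the command line.
subprocess.check_output('grep err /path/file', shell=True)

# List of tokens, shell=False (the default): passed directly to exec.
subprocess.check_output(['grep', 'err', '/path/file'])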
ssh myhost password will try to run the command password on myhost, so that's not what you want. Probably you should simply set up passwordless SSH with keys in the first place.
... But you can use this syntax to run the commands in one go; just pass the shell command to ssh as a string.
from subprocess import check_output

result = check_output(['ssh', 'my_server',
    # Fix the quoting and the Useless Use of Cat, drop the pointless cd,
    # and use grep -E so the | alternation in the regex actually works
    "grep -E 'err|error' /path/file"])

Subprocess starting two processes instead of one

I'm using subprocess to start a process and let it run in the background; it's a server application. The process itself is a Java program with a thin wrapper (which, among other things, means that I can launch it as an executable without having to call java explicitly).
I'm using Popen to run the process, and when I set shell=False, it runs but spawns two processes instead of one. The first process has init as its parent, and when I inspect it via ps it just shows the raw command. The second process, however, shows the expanded java arguments (-D and -X flags); this is what I expect to see, and it's how the process looks when I run the command manually.
Interestingly, when I set shell=True, the command fails. The command does print its help message, but that doesn't seem to indicate any problem with my argument list (there shouldn't be one). Everything is the same except the shell named argument to Popen.
I'm using Python 2.7 on Ubuntu. I'm not really sure what's going on here; any help is appreciated. I suppose it's possible that the java command is doing an exec/fork and, for some reason, the parent process isn't dying when I start it through Python.
I saw this SO question which looked promising but doesn't change the behavior that I'm experiencing.
This is actually more of a question about the wrapper than about Python -- you would get the same behavior running it from any other language.
To get the behavior you want, the wrapper would want to have the line where it invokes the JVM look as follows:
exec java -D... -cp ... main.class.here "$@"
...as opposed to lacking the exec in front:
java -D... -cp ... main.class.here "$@"
In the former case, the process image of the wrapper is replaced with that of the JVM it invokes; in the latter, the wrapper waits for the JVM to exit, and then continues to run.
If the wrapper does any cleanup after the JVM exits, using exec will prevent that cleanup from happening and would thus be the Wrong Thing; in that case, you want the wrapper to keep existing while the JVM runs, as otherwise it would be unable to perform the cleanup afterwards.
Be aware that if the wrapper is responsible for detaching the subprocess, it needs to be able to close open file handles for this to happen correctly. Consider passing close_fds=True to your Popen call if your parent process has more file descriptors than only stdin, stdout and stderr open.
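On the Python side, a minimal sketch of that advice might look like this (the server path is hypothetical):

import subprocess

# close_fds=True keeps the child from inheriting any extra file
# descriptors that could tie it to the parent after detaching.
# "/opt/myserver/bin/server" is a hypothetical path to the wrapper.
proc = subprocess.Popen(["/opt/myserver/bin/server"], close_fds=True)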

Running a long running process via Python Popen

So, I thought it would be cool if I could get my dev env up and running in one fell swoop with some Python magic: various DBs, webserver, etc.
However, every variation of the following that I have tried fails with 'file not found'.
p2 = Popen(["exec", "/path/to/redis/server"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
Running the command directly from the shell (i.e. exec /path/to/redis/server) works just fine. Strangely enough, a simple command such as uptime seems to work fine too.
Any clues as to what is going on? Also, while we are on the topic: is multiprocessing the thing to use when I want to run many of these external processes in parallel?
Thanks
exec is a builtin command in bash, not an executable. The 'file not found' error comes from Popen looking for an executable named exec in $PATH and not finding one.
I would try omitting "exec" in the Popen call.
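A minimal sketch of the same call without exec (keeping the placeholder path from the question):

from subprocess import Popen, PIPE

# Popen executes the binary directly; no shell (and no shell builtins)
# are involved, so "exec" is neither needed nor understood here.
p2 = Popen(["/path/to/redis/server"], stdout=PIPE)
# For a long-running server, don't block on communicate(); the process
# keeps running in the background while this script continues.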

[Python] How to launch a program using Thread

Hello professionals,
I know how to launch a command in Linux's terminal via a process, something like the following:
import subprocess
subprocess.Popen(['ifconfig', '-a'])
But this opens it in a separate process; how can I launch it in a thread instead?
I know about thread.start_new_thread, but that has to call a function, and within the function I would still have to use subprocess, which just opens a process again...
Thank you for your help.
Respectfully..
A command like ifconfig always runs in a separate process. There is no way to run that command within only a "thread" of your application.
Perhaps you could provide more detail about why you believe this is necessary, and we may be able to suggest a different approach. For example, if you need to capture the output of the ifconfig command, there are certainly ways of doing that within Python.
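For instance, here is a minimal sketch of capturing the command's output from a thread (the command still runs as a separate process; only the waiting happens in the thread):

import subprocess
import threading

def run_ifconfig():
    # The command still runs in its own process; this thread merely
    # starts it, waits for it, and captures its output.
    output = subprocess.check_output(['ifconfig', '-a'])
    print(output)

t = threading.Thread(target=run_ifconfig)
t.start()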
As you are calling another process outside of your Python application, I think there is no way to make it run inside the Python interpreter itself.
