I have a shell script (test.sh, shown below) which has an infinite while loop and prints some data to the screen.
I am calling all my .sh scripts from Python, and I need to stop test.sh before calling my other commands.
I am using Python 2.7, and the Linux system is on proprietary hardware where I cannot install any Python modules.
Here is my test.sh
#!/bin/sh
while :
do
echo "this code is in infinite while loop"
sleep 1
done
Here is my Python script:
import subprocess as SP
SP.call(['./test.sh']) # I need to stop the test.sh in order for python to
# go and execute more commands and call
# another_script.sh
# some code statements
SP.call(['./another_script.sh'])
Well, a quick Google search led me to subprocess's call and Popen. Popen has a terminate() method, but it doesn't work for me (or I'm doing something wrong here):
cmd=['test.sh']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
p.terminate()
Any other suggestions on how I can stop test.sh from Python are highly appreciated.
PS: I don't mind running test.sh for, say, T seconds and then stopping it.
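For what it's worth, a likely reason the snippet above does nothing: with shell=True, terminate() signals the intermediate shell rather than test.sh itself, and 'test.sh' without ./ may not even be found. A minimal sketch of the timed-stop idea, without shell=True (T is 5 seconds here):
import subprocess as SP
import time

p = SP.Popen(['./test.sh'])      # p now refers to test.sh itself
time.sleep(5)                    # let it run for T seconds
p.terminate()                    # SIGTERM stops the while loop
p.wait()                         # reap the child
SP.call(['./another_script.sh'])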
I use tmux for this type of process; Python has a good package, libtmux, which should solve your problem.
Basically you create a tmux session:
import libtmux
server = libtmux.Server()
session = server.new_session(session_name='my_session_name')
Then you create a window to run the command in:
window = session.new_window(attach=False, window_name='my_window_name')
command = './my_bash_file.sh'
window.select_pane('0').send_keys(command, enter=True)
You'll be able to run subsequent commands right after this one. To access the tmux session from your bash terminal, use tmux attach -t my_session_name; you'll then be in the tmux window, the one which ran your bash script.
To kill the tmux window, use window.kill_window(). There are a lot of options; look at the libtmux docs.
The project aileen has some useful tmux commands if you want to see some more implementations.
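Putting it together for the original question, a minimal sketch (assuming libtmux can actually be installed, which the asker noted may not be possible on their hardware):
import time
import libtmux

server = libtmux.Server()
session = server.new_session(session_name='my_session_name')
window = session.new_window(attach=False, window_name='my_window_name')
window.select_pane('0').send_keys('./test.sh', enter=True)

time.sleep(5)          # let test.sh run for T seconds
window.kill_window()   # stop test.sh; python can now run ./another_script.sh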
Related
I have a Python script bgservice.py and I want it to run all the time, because it is part of the web service I am building. How can I make it run continuously even after I log out of SSH?
Run nohup python bgservice.py & to get the script to ignore the hangup signal and keep running. Output will be put in nohup.out.
Ideally, you'd run your script with something like supervise so that it can be restarted if (when) it dies.
If you've already started the process, and don't want to kill it and restart under nohup, you can send it to the background, then disown it.
Ctrl+Z (suspend the process)
bg (restart the process in the background)
disown %1 (assuming this is job #1; use jobs to determine)
Running a Python Script in the Background
First, you need to add a shebang line in the Python script which looks like the following:
#!/usr/bin/env python3
This path is necessary if you have multiple versions of Python installed, and /usr/bin/env will ensure that the first Python interpreter in your $PATH environment variable is taken. You can also hardcode the path of your Python interpreter (e.g. #!/usr/bin/python3), but this is not flexible and not portable to other machines. Next, you'll need to set the permissions of the file to allow execution:
chmod +x test.py
Now you can run the script with nohup which ignores the hangup signal. This means that you can close the terminal without stopping the execution. Also, don’t forget to add & so the script runs in the background:
nohup /path/to/test.py &
If you did not add a shebang to the file you can instead run the script with this command:
nohup python /path/to/test.py &
The output will be saved in the nohup.out file, unless you specify an output file as shown here:
nohup /path/to/test.py > output.log &
nohup python /path/to/test.py > output.log &
If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
# doesn't create nohup.out
nohup command >/dev/null 2>&1
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
# runs in background, still doesn't create nohup.out
nohup command >/dev/null 2>&1 &
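If you would rather launch the background process from Python itself than from the shell, a minimal sketch of the same idea (paths are placeholders; preexec_fn=os.setsid puts the child in a new session so the terminal's hangup signal never reaches it):
import os
import subprocess

log = open('output.log', 'a')
devnull = open(os.devnull, 'r')
subprocess.Popen(['python', '/path/to/test.py'],
                 stdin=devnull, stdout=log, stderr=log,
                 preexec_fn=os.setsid)   # new session: immune to SIGHUP from the terminal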
You can find the process and its process ID with this command:
ps ax | grep test.py
or list all running Python processes:
ps -fA | grep python
(ps stands for process status.)
If you want to stop the execution, you can kill it with the kill command:
kill PID
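The lookup-and-kill can also be scripted from Python; a minimal sketch using pgrep (the pattern test.py is illustrative, and check_output raises CalledProcessError if nothing matches):
import os
import signal
import subprocess

# find the PIDs of processes whose command line mentions test.py
pids = subprocess.check_output(['pgrep', '-f', 'test.py']).split()
for pid in pids:
    os.kill(int(pid), signal.SIGTERM)   # same as `kill PID`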
You could also use GNU screen which just about every Linux/Unix system should have.
If you are on Ubuntu/Debian, its enhanced variant byobu is rather nice too.
You might consider turning your python script into a proper python daemon, as described here.
python-daemon is a good tool that can be used to run Python scripts as a background daemon process rather than a forever-running script. You will need to modify existing code a bit, but it's plain and simple.
If you are facing problems with python-daemon, there is another utility, supervisor, that will do the same for you, but in this case you won't have to write any code (or modify existing code) as it is an out-of-the-box solution for daemonizing processes.
Alternate answer: tmux
ssh into the remote machine
type tmux at the command line
start the process you want inside tmux, e.g. python3 main.py
leave the tmux session with Ctrl+b then d
It is now safe to exit the remote machine. When you come back use tmux attach to re-enter tmux session.
If you want to start multiple sessions, name each session using Ctrl+b then $, then type your session name.
to list all session use tmux list-sessions
to attach a running session use tmux attach-session -t <session-name>.
You can nohup it, but I prefer screen.
Here is a simple solution in Python using a decorator:
import os, time

def daemon(func):
    def wrapper(*args, **kwargs):
        if os.fork(): return    # parent returns immediately
        func(*args, **kwargs)   # child runs the function...
        os._exit(os.EX_OK)      # ...then exits without running cleanup handlers
    return wrapper

@daemon
def my_func(count=10):
    for i in range(0, count):
        print('parent pid: %d' % os.getppid())
        time.sleep(1)

my_func(count=10)
# still in the parent process
time.sleep(2)
# after 2 seconds the parent exits; my_func lives on by itself
You can of course put the contents of your bgservice.py file in place of my_func.
Try this:
nohup python -u <your file name>.py >> <your log file>.log &
You can run the above command inside screen and then detach from the screen session.
Now you can tail the logs of your Python script with: tail -f <your log file>.log
To kill your script, you can use the ps aux and kill commands.
The zsh shell has an option to make all background processes run with nohup.
In ~/.zshrc add the lines:
setopt nocheckjobs #don't warn about bg processes on exit
setopt nohup #don't kill bg processes on exit
Then you just need to run a process like so: python bgservice.py &, and you no longer need to use the nohup command.
I know not many people use zsh, but it's a really cool shell which I would recommend.
If what you need is that the process should run forever no matter whether you are logged in or not, consider running the process as a daemon.
supervisord is a great out of the box solution that can be used to daemonize any process. It has another controlling utility supervisorctl that can be used to monitor processes that are being run by supervisor.
You don't have to write any extra code or modify existing scripts to make this work. Moreover, its thorough documentation makes the process much simpler.
After scratching my head for hours around python-daemon, supervisor is the solution that worked for me in minutes.
Hope this helps someone trying to make python-daemon work
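A minimal sketch of a supervisor program section (the file path, program name, and log locations are illustrative; see the supervisor docs for the full option list):
; e.g. /etc/supervisor/conf.d/bgservice.conf
[program:bgservice]
command=python /path/to/bgservice.py
autostart=true
autorestart=true
stdout_logfile=/var/log/bgservice.out.log
stderr_logfile=/var/log/bgservice.err.log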
You can also use Yapdi:
Basic usage:
import yapdi

daemon = yapdi.Daemon()
retcode = daemon.daemonize()

# This would run in daemon mode; output is not visible
if retcode == yapdi.OPERATION_SUCCESSFUL:
    print('Hello Daemon')
Edit:
The original intent of this question was to find a way to launch an interactive ssh session via a Python script. I'd tried subprocess.call() before and had gotten a Killed response before anything was output to the terminal. I just assumed this was an issue/limitation with the subprocess module rather than an issue somewhere else. This was found not to be the case when I ran the script on a non-resource-limited machine, where it worked fine.
This then turned the question into: How can I run an interactive ssh session with whatever resource limitations were preventing it from running?
Shoutout to Charles Duffy, who was a huge help in trying to diagnose all of this.
Below is the original question:
Background:
So I have a script that is currently written in bash. It parses the output of a few console functions and then opens up an ssh session based on those parsed outputs.
It currently works fine, but I'd like to expand its capabilities a bit by adding some flag arguments to it. I've worked with argparse before and thoroughly enjoyed it. I tried to do some flag work in bash, and let's just say it leaves much to be desired.
The Actual Question:
Is it possible to have Python do stuff in a console and then put the user into that console?
Something like using subprocess to run a series of commands in the currently viewed console? This is in contrast to how subprocess normally runs, where it runs the commands and then shuts the intermediate console down.
Specific Example because I'm not sure if what I'm describing makes sense:
So here's a basic run down of the functionality I was wanting:
Run a python script
Have that script run some console command and parse the output
Run the following command:
ssh -t $correctnode "cd /local_scratch/pbs.$jobid; bash -l"
This command will ssh to the $correctnode, change directory, and then leave a bash window in that node open.
I already know how to do parts 1 and 2. It's part three that I can't figure out. Any help would be appreciated.
Edit: Unlike this question, I am not simply trying to run a command. I'm trying to display a shell that is created by a command. Specifically, I want to display a bash shell created through an ssh command.
Context For Readers
The OP is operating on a very resource-constrained (particularly, it appears, process-constrained) jumphost box, where starting an ssh process as a subprocess of python goes over a relevant limit (on number of processes, perhaps?)
Approach A: Replacing The Python Interpreter With Your Interactive Process
Using the exec*() family of system calls causes your original process to no longer be in memory (unlike the fork()+exec*() combination used to start a subprocess while leaving the parent process running), so it doesn't count against the account's limits.
import argparse
import os
try:
from shlex import quote
except ImportError:
from pipes import quote
parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()
remote_cmd_str = 'cd /local_scratch/pbs.%s && exec bash -i' % (quote(args.jobid))
local_cmd = [
'/usr/bin/env', 'ssh', '-tt', args.node, remote_cmd_str
]
os.execv("/usr/bin/env", local_cmd)
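Invoked as, say, python wrapper.py node17 12345 (the script name and arguments here are hypothetical), the Python interpreter replaces itself with the ssh client, so only one process slot is ever consumed.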
Approach B: Generating Shell Commands From Python
If we use Python to generate a shell command, the shell can invoke that command only after the Python process has exited, so we stay under our externally enforced process limit.
First, a slightly more robust approach at generating eval-able output:
import argparse
try:
from shlex import quote
except ImportError:
from pipes import quote
parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()
remoteCmd = ['cd', '/local_scratch/pbs.%s' % (args.jobid)]
remoteCmdStr = ' '.join(quote(x) for x in remoteCmd) + ' && bash -l'
cmd = ['ssh', '-t', args.node, remoteCmdStr]
print(' '.join(quote(x) for x in cmd))
To run this from a shell, if the above is named as genSshCmd:
#!/bin/sh
eval "$(genSshCmd "$#")"
Note that there are two separate layers of quoting here: One for the local shell running eval, and the second for the remote shell started by SSH. This is critical -- you don't want a jobid of $(rm -rf ~) to actually invoke rm.
This is in no way a real answer, just an illustration of my comment.
Let's say you have a Python script, test.py:
import argparse
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('myarg', nargs="*")
args = parser.parse_args()
print("echo Hello world! My arguments are: " + " ".join(args.myarg))
So, you create a bash wrapper around it, test.sh
set -e
$(python test.py $*)
and this is what you get:
$ bash test.sh
Hello world! My arguments are:
$ bash test.sh one two
Hello world! My arguments are: one two
What is going on here:
the Python script does not execute commands. Instead, it outputs the commands the bash script will run (echo in this example). In your case, the last command would be your ssh invocation
bash executes the output of the Python script (the $(...) part), passing on all its arguments (the $* part)
you can use argparse inside the Python script; if anything is wrong with the arguments, the message will be written to stderr and will not be executed by bash, and the bash script will stop because of the set -e flag
I have ten Python scripts in the same directory. How can I run all of them from the command line so that they keep working in the background?
I use an SSH terminal to connect to a CentOS server and run a Python script as:
python index.py
But when I close the SSH client terminal, the process dies.
You can append & to make things run in the background, and use nohup so they continue after logout, such as:
nohup python index.py &
If you want to run multiple things this way, it's probably easiest to just make a script to start them all (with a shell of your choice):
#!/bin/bash
nohup python index1.py &
nohup python index2.py &
...
As long as you don't need to interact with the scripts once they are started (and don't need any stdout printing), this could be pretty easily automated with another Python script using the subprocess module:
import subprocess

scripts = ['index1.py', 'index2.py']   # your ten scripts here
for script in scripts:
    # use subprocess.run() on Python 3.x (this blocks until each script terminates)
    subprocess.call(['python', script])   # use Popen if you want non-blocking
Also of note: stdout/stderr printing is possible, just more work.
Let's say I issue a command from the Linux command line. This will cause Linux to create a new process, and let's say that the process expects to receive commands from the user.
For example, I will run a Python script test.py which accepts commands from the user:
$python test.py
TEST>addController(192.168.56.101)
Controller added
TEST>
The question I have is: can I write a script which will go into the command line (TEST>) and issue a command? As far as I know, if I write a script to run multiple commands, it will wait for the first process to exit before running the next command.
You should look into expect. It's a tool that is designed to automate user interaction with commands that need it. The man page explains how to use it.
Seems like there is also pexpect, a Python version of similar functionality.
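A minimal pexpect sketch of the interaction shown above (the prompt and command come from the example; pexpect would need to be installed):
import pexpect

child = pexpect.spawn('python test.py')
child.expect('TEST>')                             # wait for the prompt
child.sendline('addController(192.168.56.101)')   # issue the command
child.expect('TEST>')                             # wait for the next prompt
print(child.before)                               # everything printed in between
child.close()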
Assuming the Python script is reading its commands from stdin, you can pass them in with a pipe or a redirection:
$ python test.py <<< 'addController(192.168.56.101)'
$ echo $'addController(192.168.56.101)\nfoo()\nbar()\nbaz()' | python test.py
$ python test.py <<EOF
addController(192.168.56.101)
foo()
bar()
baz()
EOF
If you don't mind waiting for the calls to finish (one at a time) before returning control to your program, you can use the subprocess library. If you want to start something running and not wait for it to finish, you can use the multiprocessing library.
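A minimal sketch of the distinction (the script name is a placeholder):
import subprocess
import multiprocessing

# blocking: control returns only after test.py exits
subprocess.call(['python', 'test.py'])

# non-blocking: run the call in a separate process and keep going
def run_script():
    subprocess.call(['python', 'test.py'])

p = multiprocessing.Process(target=run_script)
p.start()
# ... do other work while test.py runs ...
p.join()   # wait for it later, only when you need it finished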
I want to run and control PSFTP from a Python script in order to get log files from a UNIX box onto my Windows machine.
I can start up PSFTP and log in, but when I try to run a command remotely, such as 'cd', it isn't recognised by PSFTP and is just run in the terminal when I close PSFTP.
The code which i am trying to run is as follows:
import os
os.system("<directory> -l <username> -pw <password>" )
os.system("cd <anotherDirectory>")
I was just wondering if this is actually possible, or if there is a better way to do this in Python.
Thanks.
You'll need to run PSFTP as a subprocess and speak directly with the process. os.system spawns a separate subshell each time it's invoked so it doesn't work like typing commands sequentially into a command prompt window. Take a look at the documentation for the standard Python subprocess module. You should be able to accomplish your goal from there. Alternatively, there are a few Python SSH packages available such as paramiko and Twisted. If you're already happy with PSFTP, I'd definitely stick with trying to make it work first though.
Subprocess module hint:
import subprocess

# The following line spawns the psftp process and binds its standard input
# to p.stdin and its standard output to p.stdout
p = subprocess.Popen('psftp -l testuser -pw testpass'.split(),
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Send the 'cd some_directory' command to the process as if a user were
# typing it at the command line
p.stdin.write('cd some_directory\n')

# When you're done, close stdin and collect any remaining output so the
# process can exit cleanly
out, _ = p.communicate()
This has sort of been answered in: SFTP in Python? (platform independent)
http://www.lag.net/paramiko/
The advantage of the pure-Python approach is that you don't always need psftp installed.
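For completeness, a minimal paramiko sketch of the same task (host, credentials, and paths are placeholders):
import paramiko

transport = paramiko.Transport(('unixbox.example.com', 22))
transport.connect(username='testuser', password='testpass')
sftp = paramiko.SFTPClient.from_transport(transport)

sftp.get('/var/log/app.log', 'C:\\logs\\app.log')   # copy remote -> local

sftp.close()
transport.close()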