I am running the Python script shown below. The script does an ssh to a remote machine and runs a C program there in the background. But on running the Python script I get the following output:
The output above means that a.out was run and its PID is [1] 2115.
However, when I log in to the remote machine and check for a.out with the 'ps' command, I don't see it.
Another observation: when I add a delay such as time.sleep(20) to the Python script and check the remote machine while the script is still running, a.out is active.
#!/usr/bin/python
import HostMod  # module where the ssh function is written

COMMAND_PROMPT1 = '[#$] '
p = HostMod.HostModule()
obj1 = p.HostLogin('10.200.2.197', 'root', 'newnet')  # ssh connection to the remote system
obj1.sendline('./a.out > test.txt &')  # start the program on the remote machine in the background
obj1.expect(COMMAND_PROMPT1)
print obj1.before
The a.out program:
#include <unistd.h>

int main()
{
    while (1)
    {
        sleep(10);
    }
    return 0;
}
Please try giving the absolute path of ./a.out.
Try using nohup
...
obj1.sendline('nohup ./a.out > test.txt &')  # start the program on the remote machine with nohup
But you should really not use a shell to invoke commands over ssh. The ssh protocol has built-in support for running commands. I am not sure how your HostMod module works, but you could try this from your shell (it would be easy to port to subprocess):
ssh somehost nohup sleep 20
<Ctrl+C>
ssh somehost
ps ax | grep sleep
And you should see your sleep process still running. This method does not instantiate a shell, which is much more reliable, since you may or may not have control over which shell is run, what is in your ~/.(...)rc files, etc. All in all, much more portable.
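If HostMod ever gets in the way, the same idea is easy to express with Python's subprocess module. A minimal sketch, reusing the placeholder host from the example above and the a.out command from the question:

import subprocess

# ssh joins everything after the host name into one command string and runs
# it on the remote machine; nohup plus the output redirection lets a.out
# keep running after the ssh session closes.
subprocess.check_call(["ssh", "somehost", "nohup ./a.out > test.txt 2>&1 &"])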
Update:
The problem seems to be quite different from what I initially thought. Information about this is at the bottom of the post.
I've been having quite the struggle with starting my Python script on a remote Raspberry Pi. I can't quite get my script working: either it runs but blocks the terminal in Jenkins, preventing the deployment from ever finishing, or it doesn't start at all. The weird thing, in my opinion, is that when I run the exact same commands from my Windows machine, or even directly from the Jenkins node (Ubuntu Server), the script starts absolutely fine and without blocking the terminal. It just behaves like the good background process I want it to be. But no! Not through the Jenkins pipeline itself.
I'll summarize the setup:
Jenkins controller: Docker container in my Unraid Server with SWAG reverse proxy for GitHub hook.
Jenkins node: VM in Unraid Server.
Target machine: Raspberry Pi, currently available on my local network (will set up VPN for this later).
Target script: A Python script that updates a small screen through GPIO with information fetched via an HTTP GET request; it has a while True loop as the main loop and some time.sleep() calls.
The following is my pipeline:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sh 'ls'
                sshagent(credentials: ['jenkinsvm-to-pi']) {
                    // Clear any existing instances
                    sh """
                    ssh ${target} pkill -f ${filename} &
                    """
                    // Copy downloaded files to target machine.
                    sh """
                    scp lcd1602.py ${target}:home/pi/Documents/${filename}
                    """
                    // Run script on target machine.
                    sh """
                    ssh ${target} "nohup python3 home/pi/Documents/${filename}" &
                    """
                }
            }
        }
    }
}
The pkill -f command kills the process if I had started it through other means than the pipeline. But the pipeline does not start it again after that. I have tried lots of variations, with & and without.
The entire pipeline seems quite simple to me, and I can't for the life of me figure out what's causing this issue.
Output from Jenkins
I would greatly appreciate some assistance with this.
Thanks!
EDIT1:
Also tried this setup:
steps {
sshagent(credentials: ['conn-to-mmrasp'])
{
sh "ssh ${target} pkill -f ${filename} || echo 'No process was running'"
sh "scp lcd1602.py ${target}:~/Documents"
sh "ssh ${target} nohup python3 Documents/${filename} &"
}
}
A pretty weird thing is that it seemed to work the first time I updated the pipeline. At least so it seemed, but subsequent attempts didn't work. The console log isn't very helpful either, but I'll include the output.
[ssh-agent] Using credentials pi
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-SSrqVoXbnr0v/agent.40947
SSH_AGENT_PID=40949
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/workspace/AlbinTracker_main@tmp/private_key_4492026892118281470.key (/home/jenkins/workspace/AlbinTracker_main@tmp/private_key_4492026892118281470.key)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
+ ssh pi@XXX.XXX.XXX.XXX pkill -f lcd1602.py
+ echo No process was running
No process was running
[Pipeline] sh
+ scp lcd1602.py pi@XXX.XXX.XXX.XXX:~/Documents
[Pipeline] sh
+ ssh pi@XXX.XXX.XXX.XXX nohup python3 Documents/lcd1602.py
[Pipeline] }
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 40949 killed;
[ssh-agent] Stopped.
[Pipeline] // sshagent
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
I've done some more testing. It seems like something is wrong even when running the script from a terminal SSH session. When I run nohup python3 lcd1602.py & inside an SSH session, there is no output to the nohup.out file, whereas when I run it without nohup (python3 lcd1602.py &), I get the desired output directly in the terminal. If I run the command without the &, the output is directed to nohup.out like it should be, but this blocks the terminal, which clogs the deploy stage. I'm starting to think this is a problem related to nohup (my use of it) and not Jenkins?
EDIT2:
I seem to have fixed a part of the problem, but another part remains.
I added the unbuffered flag -u to the python command, to make sure that the logging is written out instead of being held in the buffer.
I added a bash script to the repository that does the executing, instead of running the Python script directly.
Now I have some logging to make sure that I know when the script has been running, etc. But I do think that the added bash script was actually what made the script start running. This works on my Raspberry Pi 3 B. When I try to run the same pipeline targeting my Raspberry Pi 1 B, it doesn't work... I'm faced with the same issue as before: the script isn't started at all (the RPi 1 B is my original target machine; I just did some testing with the RPi 3 B).
The current Jenkinsfile:
String target = 'pi@xxx.xxx.xxx.xxx'
pipeline {
agent any
stages {
stage('Deploy') {
steps {
sshagent(credentials: ['conn-to-mmrasp'])
{
sh "scp -r ./* ${target}:~"
sh "ssh ${target} bash startScript.sh &"
}
}
}
}
}
The bash script:
filename=lcd1602.py
pkill -f $filename || echo No process running
nohup python3 -u $filename >> nohup.out
Update:
I just want to clarify the situation, so that you don't have to read the wall of text just to get to this. The same pipeline, with only the target IP changed, works on my RPi 3 B, starting the script and driving the GPIO screen etc. So it seems to be an issue with the RPi 1 B; something is not entirely right with it, even though I've reflashed the OS. Any ideas on what the difference might be? The SSH configuration should not be a problem, as that would have been clear from the logs in Jenkins.
I have to run Python code on a remote Python process.
Normally, what I would do is:
ssh admin@localhost -p 61234
which opens an interactive Python console where I can execute Python code directly.
>>> print('3')
3
>>>
But I want to automate this and pass the Python code as a parameter to ssh.
I tried following options:
ssh admin@localhost -v -p 61234 python logs.py
ssh admin@localhost -v -p 61234 nohup python logs.py
ssh admin@localhost -p 61234 < logs.py
cat logs.py | ssh admin@localhost -p 61234 python -
But all options give following error:
shell request failed on channel 0
logs.py:
#!/usr/bin/env python
# tried with and without first line.
print('3')
netstat -anp | grep 61234
tcp 0 0 127.0.0.1:61234 0.0.0.0:* LISTEN 6/python2.7
Is there a way to do this?
Pretty sure this is overkill, but who knows, maybe you need more than just a simple command in the future.
The Paramiko package is what you're looking for. The project is full of demos which show how to do this in many different ways.
The function you'll find most useful is paramiko.client.SSHClient.exec_command:
Execute a command on the SSH server. A new Channel is opened and the
requested command is executed. The command’s input and output streams
are returned as Python file-like objects representing stdin, stdout,
and stderr.
Demos Folder
In-depth Testing
Interactive.py is a fully interactive TTY (remote terminal control functions).
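A minimal sketch of exec_command in use, assuming password authentication works against the server on port 61234 from the question (the password here is a placeholder):

import paramiko

# Open an SSH connection and run one command on the remote side.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('localhost', port=61234, username='admin', password='your-password')

# exec_command returns file-like objects for the command's stdin/stdout/stderr.
stdin, stdout, stderr = client.exec_command('python logs.py')
print(stdout.read().decode())

client.close()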
PyCharm Professional Edition (the Python IDE from JetBrains) has tools for remote development, including SSH remoting and the ability to run a remote interpreter.
I have a parent script (start.py) whose primary purpose is to start background processes and exit. When I ssh directly to remote_host and run the script, it works as expected.
[user@local_host ~]# ssh remote_host
user@remote_host's password: ****
[user@remote_host ~]# time python start.py --config_file /data/workload.pg
real 0m0.037s
user 0m0.025s
sys 0m0.012s
The exit code of this script:
[root@perf72 ~]# echo $?
0
To simplify, instead of establishing the ssh session first and running the command, I want to just execute the command remotely from local_host:
[user@local_host ~]# ssh -o StrictHostKeyChecking=no -i /tmp/tmpqcz5l5il user@remote_host -p 22 "python start.py --config_file /data/workload.pg"
real 12m6.594s
user 0m0.027s
sys 0m0.016s
The problem here is that the ssh session remains open for the life of the background processes, not the life of the start.py script, which is less than one second. It should just disconnect when the start.py script exits, but it doesn't.
Do I need a specific sys.exit() signal in the start.py script which will prompt the ssh session to disconnect?
ssh is awaiting output on the called process's stdout, so it can print it if there is any. That file handle is inherited by the subprocesses you're spawning, so it's still open even though the Python script has exited, and as long as it's open, ssh will keep waiting.
If you change your ssh command line to run the remote script as "python start.py --config_file /data/workload.pg > /dev/null" instead, the ssh connection will close as soon as the Python script does.
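Alternatively, start.py itself can stop its children from inheriting that stdout. A rough sketch of the idea, assuming Python 3 and that the background workers are started with subprocess (worker.py and the log path are placeholders):

import subprocess

# Give the background process its own stdout/stderr and its own session,
# so it neither holds the ssh channel open nor dies with start.py.
with open('/data/worker.log', 'ab') as log:
    subprocess.Popen(
        ['python', 'worker.py', '--config_file', '/data/workload.pg'],
        stdin=subprocess.DEVNULL,
        stdout=log,
        stderr=subprocess.STDOUT,
        start_new_session=True,
    )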
I want to make sure my Python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running?
Taken from this answer:
A bash script which starts your Python script and restarts it if it doesn't exit normally:
#!/bin/bash
until ./script.py; do
    echo "'script.py' exited with code $?. Restarting..." >&2
    sleep 1
done
Then just start the monitor script in the background:
nohup script_monitor.sh &
Edit for multiple scripts:
Monitor script:
cat script_monitor.sh
#!/bin/bash
until ./script1.py
do
    echo "'script1.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
until ./script2.py
do
    echo "'script2.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
scripts example:
cat script1.py
#!/usr/bin/python
import time
while True:
    print 'script1 running'
    time.sleep(3)
cat script2.py
#!/usr/bin/python
import time
while True:
    print 'script2 running'
    time.sleep(3)
Then start the monitor script:
./script_monitor.sh
This starts one monitor script per python script in the background.
Try this and enter your script name.
ps aux | grep SCRIPT_NAME
Create a script (say check_process.sh) which will:
Find the process id of your Python script using the ps command.
Save it in a variable, say pid.
Create an infinite loop. Inside it, search for your process. If it is found, sleep for 30 or 60 seconds and check again.
If the pid is not found, exit the loop and send a mail to your mail id saying that the process is not running.
Now start check_process.sh with nohup so it keeps running in the background.
I implemented it way back and remember it worked fine.
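For reference, the same monitoring loop sketched in Python 3 rather than shell (the script name, the polling interval, and the alert action are placeholders, and pgrep stands in for the ps lookup described above):

import subprocess
import time

SCRIPT_NAME = 'myscript.py'  # placeholder: the script to watch

while True:
    # pgrep -f exits with 0 if a matching process exists, non-zero otherwise.
    running = subprocess.call(['pgrep', '-f', SCRIPT_NAME],
                              stdout=subprocess.DEVNULL) == 0
    if not running:
        # Placeholder for the alert: send a mail, write a log entry, etc.
        print('%s is not running' % SCRIPT_NAME)
        break
    time.sleep(60)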
You can use
runit
supervisor
monit
systemd (I think)
Do not hack this with a script.
upstart, on Ubuntu, will monitor your process and restart it if it crashes. I believe systemd will do that too. No need to reinvent this.
I am trying to send a command from the Python shell to Ubuntu in order to find the process listening on a particular port and kill it:
import os

port = 8000
os.system("netstat -lpn | grep %s" % port)
Output:
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 22000/python
Then:
os.system("kill -SIGTERM 22000")
but I got the following error:
sh: 1: kill: Illegal option -S
For some reason the command cannot be passed to the OS with the full signal name -SIGTERM; only -S gets through. I can kill this process directly from the terminal without any problem, so it seems to be a Python or os issue... How can I run the kill command using Python?
Any help is appreciated
os.system uses sh to execute the command, not the bash you get in a terminal. The kill builtin in sh requires the signal name to be given without the SIG prefix. Change your os.system command line to kill -TERM 22000.
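For example (with the pid hard-coded to match the netstat output above):

import os

# sh's kill builtin accepts signal names without the SIG prefix.
os.system("kill -TERM 22000")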
[EDIT] As #DJanssens suggested, using os.kill is a better option than calling the shell for such a simple thing.
You could try using
import os
import signal

os.kill(process.pid, signal.SIGKILL)  # process.pid is the pid of the target process (22000 above)
Documentation can be found here.
You could also use signal.CTRL_C_EVENT, which corresponds to the Ctrl+C keystroke event.