I am using paramiko to open an SFTP connection to access a remote file. All of my code below (which lives in one function) seems to work only if I don't have logging enabled for paramiko:
paramiko.util.log_to_file('paramiko.log')
So when I do NOT have the above line of code in my file, the code below works:
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname, username=user, password=password)
sftp = client.open_sftp()
file = sftp.open(fpath, mode='r', bufsize=1)
Otherwise Python hangs on the client.connect(...) call and writes to stderr like crazy, eventually killing the VM my code is running on.
Specifically, paramiko hangs on this line:
t.start_client()
within the client.connect() method. Nothing useful appears in the paramiko log, and stderr is filled with errors that carry no description or traceback.
Researching this problem I came across the statement "there is a single import lock available, so when a child thread attempts another import it can block indefinitely". How do I make sure the code opening an SFTP connection is never blocked?
This is a bit of a long shot, but I have had issues with logging's use of threads causing deadlock. I was not able to track the exact problem down (though I suspect it may have been exacerbated by the use of subprocess), but I did solve it by disabling the logging module's thread support.
Try this before you activate logging:
import logging
logging.thread = None
I'd be interested to know if this solves your problem or not.
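For what it's worth, here is a minimal sketch of how the two pieces could fit together, assuming the hostname, user and password variables from the question are already defined; the key point is that logging.thread is cleared before anything configures logging:
import logging
logging.thread = None  # disable the logging module's thread support before any logging is configured

import paramiko
paramiko.util.log_to_file('paramiko.log')

client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname, username=user, password=password)
sftp = client.open_sftp()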
I have just started using ngrok. Following the standard procedure, I can start the tunnel using ./ngrok tcp 22 and see that tunnel open in my dashboard.
But I would like to use pyngrok, and when I use:
from pyngrok.conf import PyngrokConfig
from pyngrok import ngrok
ngrok.set_auth_token("<NGROK_AUTH_TOKEN>")
pyngrok_config = PyngrokConfig(config_path="/opt/ngrok/ngrok.yml")
ngrok.get_tunnels(pyngrok_config=pyngrok_config)
ssh_url = ngrok.connect()
It connects and generates a tunnel, but I can't see anything open in the dashboard. Why?
Maybe it is because the Python script executes, generates the URL, and then exits, but then how do I keep it running, and how do I start a tunnel using Python or the API in the first place? Please suggest the correct script, using Python or the API.
The thread with the ngrok tunnel will terminate as soon as the Python process terminates. So you are correct: this is happening because your script is not long-lived. The easiest way to keep it alive is to follow the example in the documentation.
Another issue is how you're setting the authtoken. Since you're not using the default config_path, you need to set this before setting the authtoken so it gets updated in the correct file (you'd also need to pass it to connect()). There are a couple ways to do this, but the easiest way from the docs is to just update the default config (since that's what will be used if you don't pass a pyngrok_config to any future method calls).
I also see that your response variable is ssh_url, so you probably want to start a TCP tunnel to a port other than 80 (the default). Perhaps you've configured this in your ngrok.yml, but if not, I've updated the call to connect() to ensure this is the type of tunnel started for you, and in case others try to use this same code snippet.
Full disclosure, I am the developer of pyngrok. Here is your code snippet updated with my changes.
import os, time
from pyngrok.conf import PyngrokConfig
from pyngrok import ngrok, conf
conf.get_default().config_path = "/opt/ngrok/ngrok.yml"
ngrok.set_auth_token(os.environ.get("NGROK_AUTH_TOKEN"))
ssh_tunnel = ngrok.connect(22, "tcp")
ngrok_process = ngrok.get_ngrok_process()
try:
    # Block until CTRL-C or some other terminating event
    ngrok_process.proc.wait()
except KeyboardInterrupt:
    print(" Shutting down server.")
    ngrok.kill()
I'm using the Python library Paramiko to run a command over ssh on another server. The problem I'm facing is that the SSHClient.exec_command() call returns immediately, handing me stdin, stdout, and stderr but giving me no other way I can see to tell whether the process is still running. I thought I might monitor whether the streams it returns are still open, but I can't find any way to do that except by trying to read from stdout or stderr, or write to stdin, and waiting for a ValueError. Can anyone tell me of something I've missed that would work instead?
Thanks to advice from @fixxxer, I found what I needed to know. My test code now looks like this:
import paramiko
import time
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('localhost', username='user', password='password')
transport = ssh.get_transport()
channel = transport.open_session()
channel.exec_command('./exec_test.py')
status = channel.recv_exit_status()
This works marvellously. It blocks until the command is finished, then allows me to continue.
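If blocking is not an option, paramiko's Channel also has an exit_status_ready() method, so the same idea can be turned into a polling loop. A rough sketch, assuming a channel opened the same way as above (a channel runs a single command, and the one-second sleep is arbitrary):
channel = transport.open_session()
channel.exec_command('./exec_test.py')
while not channel.exit_status_ready():
    # the remote command is still running; do other work or just wait
    time.sleep(1)
status = channel.recv_exit_status()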
I have a program with 2 threads. Each thread sends different commands to a remote host and redirects the output to a file. The threads use different remote hosts. I've created a connection with pxssh and am trying to send commands to the remote hosts with sendline:
s = pxssh.pxssh()
try:
    s.login(ip, user, pswd)
except:
    logging.error("login: error")
    return
logging.debug("login: success")
s.sendline("ls / >> tmpfile.log")
s.prompt()
I can send a fixed number of commands (about 500 on every host), and after that sendline stops working. The connection is OK, but the commands no longer reach the remote hosts. It looks like some resource is running out... what can it be?
Reposting as an answer, since it solved the issue:
Are you reading in between each write? If the host is producing output and you're not reading it, sooner or later a buffer will fill up and it will block until there's room to write some more. Make sure that before each write, you read any data that's available in the terminal, even if you don't want to do anything with it.
If you really don't care about the output at all, you could create a thread that constantly reads in a loop, so that your main thread can skip reading altogether. If your code needs to do anything with any part of the output, though, don't do this.
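With pxssh that boils down to waiting for the prompt and throwing away s.before after every sendline, so the pty buffer never fills up. A sketch, assuming the logged-in session s from the question and a hypothetical commands list:
for cmd in commands:
    s.sendline(cmd + " >> tmpfile.log")
    s.prompt()     # wait for the shell prompt to come back
    _ = s.before   # read (and here simply discard) whatever output accumulated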
This may or may not be a coding issue; it may also be an xinetd daemon issue, I do not know.
I have a Python script which is triggered from a Linux server running xinetd. xinetd has been set up to allow only one instance, as I only want one machine to be able to connect to the service, which is therefore also limited by IP.
Currently, when the client connects to xinetd, the service works correctly and the script begins sending its output to the client machine. However, when the client disconnects (e.g. due to a reboot), the process is still alive on the server, and this prevents the client from connecting again once it has finished rebooting.
Q: How can I detect in Python that the client has disconnected? Perhaps I can test whether stdout is no longer being read by the client (and then exit the script), or is there a much easier way in xinetd to have the child process killed when the client disconnects?
(I'm using Python 2.4.3 on RHEL 5 Linux; solutions for 2.4 are needed, but 3.1 solutions would be useful to know as well.)
Add a signal handler for SIGHUP. (x)inetd sends this upon the socket disconnecting.
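A minimal sketch of that idea (the cleanup and exit code are placeholders):
import signal, sys

def on_hup(signum, frame):
    # the client side of the socket went away; stop producing output and exit
    sys.exit(0)

signal.signal(signal.SIGHUP, on_hup)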
Monitor the signals sent to your process. Maybe your script isn't responding to the SIGHUP sent by xinetd; monitor the signals and let the process die.
You don't seem to get a SIGHUP, but you do get a SIGPIPE, at least so long as you are attempting any IO on the connection. If the application spends long periods of time not doing any IO, then you could just start a thread reading stdin to ensure you get the SIGPIPE as soon as the disconnection occurs. This was good enough for my application but then I didn't use any pipes other than the ones xinetd gave me.
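A sketch of that reader-thread idea follows; note this variant simply exits when stdin reaches EOF (which a read will see once the socket is gone) rather than relying on the signal itself, and it sticks to the old threading API so it should run on Python 2.4:
import os, sys, threading

def watch_stdin():
    while True:
        if not sys.stdin.readline():   # EOF: the client disconnected
            os._exit(0)

watcher = threading.Thread(target=watch_stdin)
watcher.setDaemon(True)
watcher.start()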
I've seen several places on the net where people talk about SIGHUP being sent on client disconnection, so I've written a small inetd Python script to test a couple of servers (one inetd and one xinetd); you could use it to check which signals actually get sent. It just logs what it finds to /var/log/test.log. Perhaps it will be useful.
#!/usr/bin/python
import os, signal, sys

skip = ["SIGKILL", "SIG_DFL", "SIGSTOP", "SIG_IGN", "SIGCLD", "SIGCHLD"]

name_map = {}
identifiers = [i for i in dir(signal) if i.startswith("SIG") and i not in skip]
for i in identifiers:
    name_map[getattr(signal, i)] = i

def handler(num, frame):
    signame = name_map[num]
    os.system("echo handled %s >> /var/log/test.log" % signame)

if __name__ == "__main__":
    for id, name in name_map.iteritems():
        signal.signal(id, handler)
    while True:
        print sys.stdin.readline()
        sys.stdout.flush()
I want to connect to and execute a process on a remote server using Python. I want to be able to get the return code and stderr (if any) of the process. Has anyone ever done anything like this before? I have done it with ssh, but I want to do it from a Python script.
Cheers.
Instead of using subprocess, use paramiko, the ssh module that was created for this purpose. Here's an example:
from paramiko import SSHClient
client = SSHClient()
client.load_system_host_keys()
client.connect("hostname", username="user")
stdin, stdout, stderr = client.exec_command('program')
print "stderr: ", stderr.readlines()
print "pwd: ", stdout.readlines()
UPDATE: The example used to use the ssh module, but that is now deprecated and paramiko is the up-to-date module that provides ssh functionality in python.
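Since the question also asks for the return code, it can be read from the channel underneath the stdout file object once the command finishes; continuing the snippet above:
status = stdout.channel.recv_exit_status()  # blocks until 'program' exits
print "exit status: ", status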
Well, you can call ssh from python...
import subprocess
ret = subprocess.call(["ssh", "user@host", "program"])

# or, with stderr:
prog = subprocess.Popen(["ssh", "user@host", "program"], stderr=subprocess.PIPE)
errdata = prog.communicate()[1]
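With the Popen variant, the exit status of the remote program (as relayed by ssh) is available on the object once communicate() has returned, for example:
print "exit status: ", prog.returncode
print "stderr: ", errdata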
If you want to wrap up the nuts and bolts of the ssh calls, you could use Fabric.
This library is geared towards deployment and server management, but it could also be useful for this kind of problem.
Also have a look at Celery, which implements a task queue for Python/Django on various brokers. It may be overkill for your problem, but if you are going to call more functions on multiple machines, it will save you a lot of headaches managing your connections.