We are attempting to use the paramiko module to create SSH tunnels on demand to arbitrary servers, for the purpose of querying remote databases. We tried the forward.py demo that ships with paramiko, but the big limitation is that there does not seem to be an easy way to close the SSH tunnel and the SSH connection once the socket server has started up.
The constraint is that we cannot launch this from a shell and then kill the shell manually to stop the listener. We need to open the SSH connection, start the tunnel, perform some actions through the tunnel, close the tunnel, and close the SSH connection, all from within Python.
I've seen references to a server.shutdown() method but it isn't clear how to implement it correctly.
I'm not sure what you mean by "implement it correctly" -- you just need to keep track of the server object and call shutdown on it when you want. In forward.py, the server isn't kept track of, because the last line of forward_tunnel is
ForwardServer(('', local_port), SubHander).serve_forever()
so the server object is not easily reachable any more. But you can just change that to, e.g.:
global theserver
theserver = ForwardServer(('', local_port), SubHander)
theserver.serve_forever()
and run the forward_tunnel function in a separate thread, so that the main function gets control back (while serve_forever() is running in that separate thread) and can call theserver.shutdown() whenever that's appropriate.
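For illustration, a minimal sketch of that approach, assuming this code lives alongside the modified forward_tunnel above, that client is an already-connected paramiko.SSHClient, and that local_port/remote_host/remote_port are set as in the demo's command-line options:

import threading
import time

# Run the (modified) demo forward_tunnel in a background thread so the main
# thread keeps control and can shut the tunnel down later.
tunnel_thread = threading.Thread(
    target=forward_tunnel,
    args=(local_port, remote_host, remote_port, client.get_transport()),
    daemon=True,
)
tunnel_thread.start()
time.sleep(1)  # crude: give serve_forever() a moment to start listening

# ... query the remote database through localhost:local_port here ...

theserver.shutdown()      # stops serve_forever()
theserver.server_close()  # releases the local listening socket
client.close()            # closes the SSH connection
tunnel_thread.join()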
Preconditions:
I want to execute multiple dynamic commands via ssh from Python, on one remote machine at a time
I couldn't find any existing modules matching my "flavour" (if you care why, see below (*) ;))
The Python scripts are running locally on an Ubuntu machine
In general, for single "one action" calls I simply do a native ssh call using subprocess.Popen, and it works fine.
But for multiple subsequent dynamic calls, I don't want to create a new ssh connection for every command, even if the remote host might allow it. I thought of the following solution:
1) Configure my local ssh on Ubuntu to use multiplexing, so that as long as a connection is open, it is reused instead of creating a new one (https://www.admin-magazin.de/News/Tipps/Mit-SSH-Multiplexing-schneller-einloggen (sorry, in German))
2) Create an ssh connection by opening it in a background thread, in which nothing else is done, besides maybe a "keepalive" if necessary, and keep the connection open until it is closed (e.g. by stopping the thread). (http://sebastiandahlgren.se/2014/06/27/running-a-method-as-a-background-thread-in-python/ )
3) Still execute ssh calls simply via subprocess.Popen, but now they automatically reuse the open connection thanks to the ssh multiplexing config.
Should this work, or is there a fallacy alert?
(*) What I don't want:
Most solutions/examples I found used paramiko. On my first "happy path" it worked like a charm, but the first failure test resulted in an internal AttributeError (https://github.com/paramiko/paramiko/issues/1617), and I don't want to build anything on that.
Other libs I found, e.g. http://robotframework.org/SSHLibrary/SSHLibrary.html, don't seem to have a real community using them.
pexpect... the whole "expect" concept gives me the creeps and should, in my opinion, only be used if there's absolutely no other reasonable option ;)
What you've proposed is fine, but you don't even need to keep an ssh connection running in a background thread. If you configure ControlMaster (for reusing an existing connection) and ControlPersist (for keeping the master connection open even when all other connections have closed), then new ssh connections will continue to use the shared connection (as long as they happen before the ControlPersist timeout).
This means that if you set up the ControlMaster configuration external to your code (e.g., in ~/.ssh/ssh_config), your code doesn't even need to be aware of the configuration: it can just continue to call ssh normally, and ssh will take care of reusing the connection.
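For illustration, a minimal sketch of the Python side under that setup; the host name, the ControlPath location, and the 10-minute ControlPersist value are placeholders, and the -o options simply mirror what would otherwise live in ~/.ssh/ssh_config:

import os
import subprocess

# All ssh invocations share one multiplexed master connection.
control_path = os.path.join(os.path.expanduser("~/.ssh"), "cm-%r@%h:%p")
SSH_OPTS = [
    "-o", "ControlMaster=auto",
    "-o", "ControlPath=" + control_path,
    "-o", "ControlPersist=10m",
]

def run_remote(host, command):
    """Run one command over ssh; later calls reuse the master connection."""
    return subprocess.run(["ssh", *SSH_OPTS, host, command],
                          capture_output=True, text=True)

# The first call sets up the master; subsequent calls piggyback on it.
print(run_remote("user@remote.example", "uptime").stdout)
print(run_remote("user@remote.example", "df -h").stdout)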
Let's say we have a server application written in Python.
Let's also say that this main server process forks two more processes at startup.
The server awaits its clients, and when one connects it decides which of the two forked processes it should pass the client's socket to.
I do not want to fork a process each time a client connects; I want to have a fixed number of servers, with one main server that receives a connection and then passes it to the server that handles the specific kind of work the client asked for.
This is meant for DoS protection, job separation, etc.
Is there any trick to pass a Python object between running Python programs?
Some shared memory or something like that?
Would pickling the socket object and pushing it through IPC work?
"Would pickling the socket object and pushing it through IPC work?"
No. Inside that object is a file descriptor or handle to the kernel socket. It's just a number that the process uses to identify the socket when making system calls.
If you pickle that Python socket object and send it to another process, that process will be using a handle for a socket it didn't open. Or worse, that handle may refer to a different open file.
The most efficient way to handle this (on Linux) is as follows:
Master process opens listening socket (e.g. TCP port 80)
Master process forks N children who all inherit that open socket
They all call accept() and block, waiting for a new connection
When a new client connects, the kernel will select one of the processes with a handle to that socket to accept the connection; the others will continue to wait
This way, you let the kernel handle the load balancing.
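For illustration, a minimal sketch of that pattern in Python; port 8080 and the worker count of 4 are placeholders:

import os
import socket

# Master opens the listening socket once; forked children inherit it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", 8080))
listener.listen(128)

for _ in range(4):                         # N children all share `listener`
    if os.fork() == 0:                     # child process
        while True:
            conn, addr = listener.accept() # kernel wakes one waiting child
            conn.sendall(b"handled by pid %d\n" % os.getpid())
            conn.close()

# Parent just waits for its children.
for _ in range(4):
    os.wait()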
If you don't want this behavior, there is a way (in UNIX) to pass an open socket to another process. Again, this is more than just the handle; the kernel effectively copies the open socket to your process's open file list. This mechanism is known as SCM_RIGHTS, and you can see an example (in C) here:
http://man7.org/tlpi/code/online/dist/sockets/scm_rights_send.c.html
Otherwise, your master process will need to effectively proxy the connection to the child processes, reducing the efficiency of the system.
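If you do want to hand individual accepted connections to chosen workers from Python, the same SCM_RIGHTS mechanism is exposed through socket.sendmsg/recvmsg, and since Python 3.9 through the send_fds/recv_fds helpers. A hedged sketch, where the socketpair stands in for a UNIX socket shared between master and worker:

import socket

# An AF_UNIX channel between master and worker (normally created before fork).
parent_end, child_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

def send_connection(conn):
    # Master side: pass the accepted connection's file descriptor across.
    socket.send_fds(parent_end, [b"x"], [conn.fileno()])
    conn.close()  # the master's copy is no longer needed

def receive_connection():
    # Worker side: receive the descriptor and wrap it in a socket object again.
    msg, fds, flags, addr = socket.recv_fds(child_end, 1, 1)
    return socket.socket(fileno=fds[0])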
I'm trying to write a wrapper Python script that automatically sets up port forwards to a remote host based on some parameters, and then gives me that shell. Everything works great, up until I want to exit the shell -- at which point, the session hangs and never returns me back to Python. Here's a toy example that does the same thing:
>>> import os
>>> os.system('ssh -L8080:localhost:80 fooserver.net')
user@fooserver.net password:
[fooserver.net]$ hostname
fooserver.net
[fooserver.net]$ exit
(hangs)
I believe this has something to do with the forwarded TCP port being in "TIME_WAIT" and keeping the SSH session alive until it closes, because this doesn't happen if I never request that forwarded port locally. What's the right way to handle this? Can I capture the "exit" from inside Python and then kill the os.system() pipe or something?
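For experimenting with the "kill the pipe" idea, a rough sketch is to launch the same ssh command via subprocess.Popen instead of os.system, so the session can be torn down from Python if it hangs after exit; this only illustrates that idea and does not by itself explain the TIME_WAIT behaviour:

import subprocess

# Keep a handle on the ssh child so Python can force it down if it lingers.
proc = subprocess.Popen(["ssh", "-L", "8080:localhost:80", "fooserver.net"])
try:
    proc.wait()                 # normally returns when the remote shell exits
except KeyboardInterrupt:
    pass                        # user broke out of a hung session
finally:
    if proc.poll() is None:     # ssh still running? terminate it ourselves
        proc.terminate()
        try:
            proc.wait(timeout=5)
        except subprocess.TimeoutExpired:
            proc.kill()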
I have a Python test program for testing features of another software component, let's call the latter the component under test (COT).
The Python test program is connected to the COT via a persistent TCP connection.
The Python program is using the Python socket API for this.
Now in order to simulate a failure of the physical link, I'd like to have the Python program shut the socket down, but without disconnecting appropriately.
I.e. I don't want anything to be sent on the TCP channel any more, including any TCP SYN/ACK/FIN. I just want the socket to go silent. It must not respond to the remote packets any more.
This is not as easy as it seems, since calling close on a socket will send a TCP FIN packet to the remote end (a graceful disconnection).
So how can I kill the socket without sending any packets out?
I cannot shut down the Python program itself, because it needs to maintain other connections to other components.
For information, the socket runs in a separate thread. So I thought of abruptly killing the thread, but this is also not so easy. (Is there any way to kill a Thread?)
Any ideas?
You can't do that from a userland process, since the in-kernel network stack still holds resources and state related to the given TCP connection. Even if you kill your whole process, the kernel is going to send a FIN to the other side, since it knows what file descriptors your process had and will try to clean them up properly.
One way to get around this is to engage firewall software (on the local or an intermediate machine). Call a script that tells the firewall to drop all packets from/to the given IP and port (this of course requires appropriate administrative privileges).
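For example, a hedged sketch of driving iptables from Python; the rule layout and the addresses are illustrative, and root privileges are required:

import subprocess

def blackhole(remote_ip, remote_port):
    """Drop all traffic to/from the peer so the local socket simply goes silent."""
    rules = [
        ["iptables", "-I", "OUTPUT", "-p", "tcp",
         "-d", remote_ip, "--dport", str(remote_port), "-j", "DROP"],
        ["iptables", "-I", "INPUT", "-p", "tcp",
         "-s", remote_ip, "--sport", str(remote_port), "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(rule, check=True)

# Example: silence the connection to the component under test.
# blackhole("10.0.0.5", 5000)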
Contrary to Nikolai's answer, there is indeed a way to reset the connection from userland such that an RST is sent and pending data discarded, rather than a FIN after all the pending data. However, as it is more abused than used, I won't publish it here. And I don't know whether it can be done from Python. Setting one of the three possible SO_LINGER configurations and closing will do it. I won't say more than that, and I will say that this technique should only be used for the purpose outlined in the question.
I have a Python application, to be more precise a network application, that can't go down; this means I can't kill the PID, since it actually talks with other servers and clients and so on ... many € per minute of downtime, you know, the usual 24/7 system.
Anyway, in my hobby projects I also work a lot with WSGI frameworks, and I noticed that I have the same problem even during off-peak hours.
Anyway, imagine a normal server using TCP/UDP (put your favourite WSGI/SIP/classified-information server here).
Now you perform a git pull on the remote server, and the new Python files land on the server (these files will of course ONLY affect the data processing and not the actual sockets, so there is no need to re-open the sockets or touch the network part in any way).
I don't usually use file monitors, since I prefer to use a signal to wake up the internal app updater.
Now imagine the following code:
from mysuper.app import handler

while True:
    data = socket.recv()
    if data:
        socket.send(handler(data))
Let's imagine that handler is an app with DB connections, cache connections, etc.
What is the best way to update the handler?
Is it safe to call reload(handler)?
Will this break DB connections?
Will DB connections survive this restart?
Will current transactions be lost?
Will this create anti-matter?
What are the best-practice patterns that you guys usually use, if there are any?
It's safe to call reload(handler).
Depends on where you initialize your connections. If you make the connections inside handler(), then yes, they'll be garbage collected when the handler() object falls out of scope. But you wouldn't be connecting inside your main loop, would you? I'd highly recommend something like:
dbconnection = connect(...)
while True:
    ...
    socket.send(handler(data, dbconnection))
if for no other reason than that you won't be making an expensive connection inside a tight loop.
That said, I'd recommend going with an entirely different architecture. Make a listener process that does basically nothing more than listen for UDP datagrams, send them to a messaging queue like RabbitMQ, then wait for the reply message to send the results back to the client. Then write your actual servers that get their requests from the messaging queue, process them, and send a reply message back.
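As an illustration only, a hedged sketch of one backend worker in that architecture, assuming a RabbitMQ broker on localhost, the pika library, and a frontend that sets reply_to on each message; queue names are placeholders:

import pika
from mysuper.app import handler  # the question's existing handler

# Worker: pull requests from the queue, process them, publish the reply.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="requests")

def on_request(ch, method, properties, body):
    reply = handler(body)
    ch.basic_publish(exchange="", routing_key=properties.reply_to, body=reply)

channel.basic_consume(queue="requests", on_message_callback=on_request,
                      auto_ack=True)
channel.start_consuming()

Because any number of these workers can consume from the same queue, you can restart them one at a time while the others keep serving requests.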
If you want to upgrade the UDP server, launch the new instance listening on another port. Update your firewall rules to redirect incoming traffic to the new port. Reload the rules. Kill the old process. Voila: seamless cutover.
The real win comes from decoupling your backend. Since multiple processes can listen for the same messages from your frontend "proxy" service, you can run several in parallel, on different machines if you want to. To upgrade the backend, start a new instance, then kill the old one, so that at least one instance is always running.
To scale your proxy, have multiple instances running on different ports or different hosts, and configure your firewall to randomly redirect incoming datagrams to one of the proxies.
To scale your backend, run more instances.