Why is an unknown line being printed to stdout by sshtunnel module? - python

If I run this code:
import sshtunnel
try:
    with sshtunnel.open_tunnel("hello", ssh_username="user",
                               ssh_password="PASSWORD",
                               remote_bind_address=("1.2.3.4", 23)) as server:
        pass
except:
    pass
I get this:
2016-04-06 10:47:53,006 | ERROR | Could not resolve IP address for hello, aborting!
I am ignoring the exception, but some random line is showing up for some reason. Why? Is this just some random print statement in some library somewhere? Is this common? Seems like libraries shouldn't really be printing anything to the screen directly. How do I suppress this line?
PS. The code is meant simply to replicate the error - obviously using a catch-all for exceptions and doing nothing with them is bad practice.

That looks like a logging statement, specifically logging.error().
It's going to the screen because you haven't set up a log handler which would send it somewhere else. See https://docs.python.org/2/library/logging.html for more information.
It's going to the standard error stream (which in a terminal window looks the same as regular output). If your code were part of a web service, it would go to the web server's error log.
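For instance, a minimal sketch of sending log records to a file instead of the terminal (the file name and format are arbitrary; whether this catches sshtunnel's message depends on how the library attaches its default handler, and the answers below show how to pass it a logger directly):

import logging

# Send WARNING-and-above records to a file instead of stderr.
logging.basicConfig(
    filename="tunnel.log",
    level=logging.WARNING,
    format="%(asctime)s | %(levelname)s | %(message)s",
)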

The first non-keyword argument you pass to open_tunnel is expected to be the destination server (either a string or an (ip, port) tuple; see the function's docstring).
Eventually, this leads to ssh_host being set to "hello" in the example you gave, which logs an error message in this except block.
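For example, a minimal sketch of a corrected call (the hostname, credentials and ports are placeholders):

import sshtunnel

# The first positional argument is the SSH gateway (a string or an (ip, port) tuple);
# the machine you ultimately want to reach goes in remote_bind_address.
with sshtunnel.open_tunnel(("ssh.example.com", 22),
                           ssh_username="user",
                           ssh_password="PASSWORD",
                           remote_bind_address=("1.2.3.4", 23)) as server:
    print(server.local_bind_port)  # the locally forwarded port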

"some random line is showing up for some reason." - its a straight-forward error message... the remote server couldn't find a host called "hello". As for why you see it, sshtunnel creates a console logger for error messages if you don't pass a logger yourself. That is a strange thing to do, IMHO. open_tunnel accepts two keywoard arguments: logger is a standard python logger and debug_level is the level to log. See python logging for details on setting up a logger.

The part where you have "hello" should contain the address of the SSH server you are connecting to. The "random line" you mentioned is the ERROR message explaining exactly that.
Although your program has no stdout statements of its own, the ERROR line comes from the sshtunnel library.
The relevant block in the library raises an exception carrying this particular error message; a raise statement attaches a message to the exception object so it can be caught by an except clause.
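As a generic illustration of that raise/except interaction (not the sshtunnel internals; the exception class here is made up):

class TunnelError(Exception):
    pass

try:
    raise TunnelError("Could not resolve IP address for hello, aborting!")
except TunnelError as exc:
    # The message travels with the exception object; it only reaches the
    # screen if something (your code, or a logger) chooses to output it.
    message = str(exc)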

TL;DR
add threaded=False
Hello, I had exactly the same ERROR, and it appears after the tunnel is closed (released):
2018-01-16 10:52:58,685| INFO | Shutting down tunnel ('0.0.0.0', 33553)
2018-01-16 10:52:58,764| INFO | Tunnel: 0.0.0.0:33553 <> sql_database_resolved_on_remote:1433 released
2018-01-16 10:52:58,767| ERROR | Could not establish connection from ('127.0.0.1', 33553) to remote side of the tunnel
2018-01-16 10:52:58,889| DEBUG | Transport is closed
That's somewhat normal, but why is it there?
"Solution-ish"
Add threaded=False.
Tested with both open_tunnel() and SSHTunnelForwarder in a with ... as tunnel: context:
import time
import socket
from sshtunnel import SSHTunnelForwarder, open_tunnel

def test_tunnel(tunnel):
    # Wait for tunnel to be established ?
    #tunnel.check_tunnels()
    #time.sleep(0.5)
    #print tunnel.tunnel_is_up
    s = socket.socket()
    s.settimeout(2)
    for i in range(0, 10):
        """
        I create a new socket each time otherwise I get
        the error 106 (errno.EISCONN):
        'Transport endpoint is already connected'
        """
        s = socket.socket()
        s.settimeout(2)
        state = s.connect_ex(('localhost', tunnel.local_bind_port))
        s.close()
        okoko = "OK" if state == 0 else "NO"
        print "%s (%s)" % (okoko, state)

with open_tunnel(
    'server_i_can_ssh_with_mykey',
    ssh_username='myuser',
    ssh_pkey='/home/myuser/.ssh/id_rsa',
    ssh_private_key_password=None,  # no pwd on my key
    remote_bind_address=('sql_database_resolved_on_remote', 1433),  # e.g.
    debug_level=10,  # remove this if you test with SSHTunnelForwarder
    threaded=False,
) as tunnel:
    test_tunnel(tunnel)
hth

Related

Checking FTP connection is valid using NOOP command

I'm having trouble with one of my scripts seemingly disconnecting from my FTP during long batches of jobs. To counter this, I've attempted to make a module as shown below:
def connect_ftp(ftp):
    print "ftp1"
    starttime = time.time()
    retry = False
    try:
        ftp.voidcmd("NOOP")
        print "ftp2"
    except:
        retry = True
        print "ftp3"
    print "ftp4"
    while (retry):
        try:
            print "ftp5"
            ftp.connect()
            ftp.login('LOGIN', 'CENSORED')
            print "ftp6"
            retry = False
            print "ftp7"
        except IOError as e:
            print "ftp8"
            retry = True
            sys.stdout.write("\rTime disconnected - " + str(time.time() - starttime))
            sys.stdout.flush()
            print "ftp9"
I call the function using only:
ftp = ftplib.FTP('CENSORED')
connect_ftp(ftp)
However, I've traced how the code runs using print lines, and on the first use of the module (before the FTP is even connected to) my script runs ftp.voidcmd("NOOP") and it does not raise an exception, so no attempt is made to connect to the FTP initially.
The output is:
ftp1
ftp2
ftp4
ftp success #this is run after the module is called
I admit my code isn't the best or prettiest, and I haven't yet implemented anything to make sure I'm not reconnecting constantly if I keep failing to reconnect, but for the life of me I can't work out why this isn't working, so I don't see a point in expanding the module yet. Is this even the best approach for connecting/reconnecting to an FTP server?
Thank you in advance
This connects to the server:
ftp = ftplib.FTP('CENSORED')
So, naturally the NOOP command succeeds, as it does not need an authenticated connection.
Your connect_ftp is correct, except that you need to specify a hostname in your connect call.
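A minimal sketch of that change (restructured slightly; the host parameter is a placeholder for your real server):

import ftplib

def connect_ftp(ftp, host='CENSORED'):
    try:
        ftp.voidcmd("NOOP")   # connection still alive: nothing to do
        return
    except ftplib.all_errors:
        pass
    ftp.connect(host)         # pass the hostname explicitly when reconnecting, as suggested above
    ftp.login('LOGIN', 'CENSORED')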

Is there a way to prevent SMTP Connection Timeout? smtplib, python

I have a script that I need to run indefinitely. The script is set to e-mail me confirmations of certain steps being completed on a daily basis. I am trying to use smtplib for this.
The initial connection is set up so that I enter my login (written into the script) and a password using getpass at the very start of the script. I do not want to leave my password written into the script, or even referenced by the script, say in a config file. Therefore, I want to enter the password at start-up and leave the SMTP connection in place.
Re-connecting to the smtp connection as required in the script would defeat the point of being able to step away from the script entirely and leave it running indefinitely.
The example code that I am working with at the moment looks like this:
import smtplib
import getpass
smtpObj = smtplib.SMTP('smtp.gmail.com',587)
smtpObj.ehlo()
smtpObj.starttls()
smtpObj.login('myemail#gmail.com',password = getpass.getpass('Enter Password: '))
Then I enter the password and the output is:
(235, b'2.7.0 Accepted')
So this all works fine.
The problem is then that the script needs to pause for anywhere from a few minutes to potentially a few days depending on the time. This is achieved using a while loop with time conditions until a certain time when the send function will be called:
smtpObj.sendmail('myemail#gmail.com','recipient#gmail.com','This is a test')
However, after a period of about 20-30 minutes it seems (i.e. if the pause is long enough), the smtpObj.sendmail call will fail due to a timeout error.
The specific error is as follows:
SMTPSenderRefused: (451, b'4.4.2 Timeout - closing connection. l22sm2469172wre.52 - gsmtp', 'myemail#gmail.com')
I have so far tried the following:
Instantiating the connection object with the following timeout parameterisation:
smtpObj = smtplib.SMTP('smtp.gmail.com',587,timeout=None)
smtpObj = smtplib.SMTP('smtp.gmail.com',587,timeout=86400)
Neither of these seems to suppress the 'timeout' of the connection (i.e. the same problem persists).
I have also tried the solution approach suggested in this post:
How can I hold a SMTP connection open with smtplib and Python?
However, this has not worked either!
I do want to try and avoid the solution where I would have to re-connect each time I want to send the e-mail, because I only want to enter the password for the connection once, manually, rather than writing it into the script either directly or indirectly.
There surely is a way to deal with the timeout issue! If anyone can help here, then please let me know! Though, if you think that the more 'obvious' solution of re-connecting just before the script needs to send an e-mail is the better way to go, then please let me know.
Thank you!...
If you don't want to include sensitive credentials in a script, you should use env vars.
From a terminal shell (outside of python):
$ export secretVariable=mySecretValue
$ echo $secretVariable
mySecretValue
$
So to leverage this in your code...
>>> import os
>>> myPW = os.getenv('secretVariable')
>>> myPW
'mySecretValue'
>>>
By doing this, you don't have to manually type in the password. Beyond that, it's not very practical to try to leave an idle SMTP connection open for potentially days at a time; just implement a try/except structure:
import smtplib
import os

def smtp_connect():
    # Instantiate a connection object...
    password = os.getenv('secretVariable')
    smtpObj = smtplib.SMTP('smtp.gmail.com', 587)
    smtpObj.ehlo()
    smtpObj.starttls()
    smtpObj.login('myemail#gmail.com', password=password)
    return smtpObj

def smtp_operations():
    global smtpObj  # rebind the module-level connection object if we have to reconnect
    try:
        # SMTP lib operations...
        smtpObj.sendmail('myemail#gmail.com', 'recipient#gmail.com', 'This is a test')
        # SMTP lib operations...
    except Exception:  # replace this with the appropriate smtplib exception
        # Overwrite the stale connection object with a new one, then re-attempt
        # the smtp_operations() method (now that a fresh connection object is instantiated).
        smtpObj = smtp_connect()
        smtp_operations()

smtpObj = smtp_connect()
smtp_operations()
By replacing except Exception with the actual SMTP Exception that gets raised when you have a stale connection, you'll be sure you're not catching exceptions that don't pertain to the connection being stale.
So, using try/except, the script will attempt to perform the SMTP operations. If the connection is stale, it will instantiate a fresh connection object and then attempt to re-execute itself with the fresh connection object.

How to return "Host is not resolvable" message after gethostbyname failure

This might be a something simple here, but I'm still learning python.
Basically I'm trying to pull an IP address from a hostname, which works fine, but if the host does not resolve it errors out. I have it now so that once it resolves the IP address it populates it to a text box, so what I'm trying to do here is, if it fails to resolve, put a message in that text box saying no host found or whatever. I get the error "socket.gaierror: [Errno 11004] getaddrinfo failed" when it does not resolve.
This is the code i have:
def findip():
    host = hname.get()   # Pulls host from text box1
    ip = gethostbyname(host)
    ipaddress.set(ip)    # exports to text box2
    return
So what I don't know is the if statement needed to handle the failure (if that makes any sense); it would be something like:
if "gethostbyname fails"
ipaddress.set("Host does not resolve")
else
ipaddress.set(ip)
You have to try and catch the exception, this way:
def findip():
    host = hname.get()
    try:
        ip = gethostbyname(host)
    except socket.gaierror:
        ip = "Host does not resolve"
    ipaddress.set(ip)
Just make sure you have the socket module imported or it won't work. If you have no other need for the socket module, you can import just the exception instead, so use either of these:
import socket
from socket import gaierror
(With the second form, catch gaierror rather than socket.gaierror.)
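For example, the same function using only the imported names (a sketch; hname and ipaddress come from the question's GUI code):

from socket import gethostbyname, gaierror

def findip():
    host = hname.get()
    try:
        ip = gethostbyname(host)
    except gaierror:
        ip = "Host does not resolve"
    ipaddress.set(ip)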

Python 3 Read data from URL [duplicate]

I have this simple minimal 'working' example below that opens a connection to google every two seconds. When I run this script when I have a working internet connection, I get the Success message, and when I then disconnect, I get the Fail message and when I reconnect again I get the Success again. So far, so good.
However, when I start the script when the internet is disconnected, I get the Fail messages, and when I connect later, I never get the Success message. I keep getting the error:
urlopen error [Errno -2] Name or service not known
What is going on?
import urllib2, time
while True:
    try:
        print('Trying')
        response = urllib2.urlopen('http://www.google.com')
        print('Success')
        time.sleep(2)
    except Exception, e:
        print('Fail ' + str(e))
        time.sleep(2)
This happens because the DNS name "www.google.com" cannot be resolved. If there is no internet connection the DNS server is probably not reachable to resolve this entry.
It seems I misread your question the first time. The behaviour you describe is, on Linux, a peculiarity of glibc. It only reads "/etc/resolv.conf" once, when loading. glibc can be forced to re-read "/etc/resolv.conf" via the res_init() function.
One solution would be to wrap the res_init() function and call it before calling getaddrinfo() (which is used indirectly by urllib2.urlopen()).
You might try the following (still assuming you're using Linux):
import ctypes
libc = ctypes.cdll.LoadLibrary('libc.so.6')
res_init = libc.__res_init
# ...
res_init()
response = urllib2.urlopen('http://www.google.com')
This might of course be optimized by waiting until "/etc/resolv.conf" is modified before calling res_init().
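A rough sketch of that optimization (illustrative only; it reuses the res_init wrapper defined above, and the helper name is my own):

import os

_resolv_mtime = None

def res_init_if_changed():
    """Call res_init() only when /etc/resolv.conf has been modified."""
    global _resolv_mtime
    try:
        mtime = os.stat('/etc/resolv.conf').st_mtime
    except OSError:
        return
    if mtime != _resolv_mtime:
        _resolv_mtime = mtime
        res_init()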
Another solution would be to install e.g. nscd (name service cache daemon).
For me, it was a proxy problem.
Running the following before importing urllib.request helped:
import os
os.environ['http_proxy'] = ''

import urllib.request
response = urllib.request.urlopen('http://www.google.com')

Checking user's network environment - Python application

My Python application connects to a MSSQL database to verify some matter numbers, but this step is only necessary to assist with data integrity and does not bring the user any additional functionality.
The database is only accessible when on my office's local network. How do I check a user's environment during startup to see if this connection can be made?
I'm using pyodbc, but I also need this program to work on OS X, so I'm not importing that module until this check returns a positive result. Any ideas?
You could try something like this:
#!/usr/bin/python
import socket

mssql_server = 'foobar.example.not'  # set this to your own value
s = socket.socket()
s.settimeout(3)
try:
    server = [x for x in socket.getaddrinfo(mssql_server, 1433) if x[0] == 2][0][-1]
except socket.gaierror:
    server = None
if server:
    try:
        s.connect(server)
    except (socket.timeout, socket.error):
        server = None
s.close()
... this should attempt to find an IPv4 address for your server (using the first one returned by getaddrinfo()) and then attempt to connect to the MS SQL TCP port (1433 by default) on that system. (Yes, change 1433 if your server is on a different port). If there's a timeout or any other socket error reported on the attempt, then server is set to None. Otherwise you probably have an MS SQL server that you could access.
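For example, the result could then gate the import the question mentions (a short sketch reusing the server variable from above):

if server is not None:
    import pyodbc   # only imported on machines that can actually reach the database
    # ... connect with pyodbc and verify matter numbers ...
else:
    # off the office network: skip the optional integrity check
    pass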
Verify the host is available using ping before import:
import subprocess

host = "<hostname>"
# add sample text for ping resolution errors here
errors = ("could not find", "unreachable")
ping = "ping {0}".format(host)
(stdout, stderr) = subprocess.Popen(ping, stdout=subprocess.PIPE).communicate()
# stdout is a byte string and must be decoded before compare
if not any(error in stdout.decode("utf-8") for error in errors):
    import pyodbc
    .....
