Here is the relevant code that's causing the error.
ftp = ftplib.FTP('server')
ftp.login(r'user', r'pass')
# change directories to the "incoming" folder
ftp.cwd('incoming')
fileObj = open(fromDirectory + os.sep + f, 'rb')
# push the file
try:
    msg = ftp.storbinary('STOR %s' % f, fileObj)
except Exception as inst:
    msg = inst
finally:
    fileObj.close()
if '226' not in msg:
    # handle error case
I've never seen this error before and any information about why I might be getting it would be useful and appreciated.
complete error message:
[Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
It should be noted that when I manually (i.e. open a dos-prompt and push the files using ftp commands) push the file from the same machine that the script is on, I have no problems.
Maybe you should increase the "timeout" option to give the server more time to respond.
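For reference, ftplib's FTP constructor accepts a timeout in seconds. A minimal sketch; the object is created without a host here, so no connection is attempted, and the timeout takes effect once connect() is called:

```python
import ftplib

# Create the client without connecting; the timeout (in seconds)
# applies to the control connection once connect() is called.
ftp = ftplib.FTP(timeout=120)
print(ftp.timeout)  # 120
```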
In my case, changing to active mode, as @Anders Lindahl suggested, got everything back into working order.
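Switching between passive and active mode in ftplib is done with set_pasv. A sketch (no connection is made here):

```python
import ftplib

ftp = ftplib.FTP()        # not connected yet
ftp.set_pasv(False)       # use active (PORT) mode instead of passive (PASV)
print(ftp.passiveserver)  # False
```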
I am writing a script to pick up CSV files from a server and copy them into a PostgreSQL table. I can't find a way to work around an error raised by Paramiko when copying the file. I don't really understand the error, and I haven't found related posts that could help me solve this issue.
An SFTP connection works well, database remote access works well too, here is the problematic code chunk:
try:
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp = transport.open_sftp_client()
    print("SFTP Client : Open")
except Exception as e:
    msg = "Error connecting via ssh: %s" % e
    raise paramiko.SSHException(msg)

XT = "*.csv"
for filename in sftp.listdir(SSH_DIR):
    print(filename, 'if-found')
    print("entered loop")
    path = '/%s/%s' % (SSH_DIR, filename)
    fobj = sftp.file(os.path.join(SSH_DIR, filename), 'rb')
    print(fobj)
    cur.execute('TRUNCATE TABLE %s' % TABLE_NAME)
    sql = "COPY %s FROM STDIN WITH DELIMITER AS ',' csv header"
    table = 'my_table'
    cur.copy_expert(sql=sql % table, file=fobj)
    conn.commit()
transport.close()
And here is the error raised when executing the script:
Database connected...
SSH connection succesful
SFTP Client : Open
kpis_inventory_analysis.csv if-found
entered loop
<paramiko.sftp_file.SFTPFile object at 0x7f0369d81f28>
Exception ignored in: <bound method SFTPFile.__del__ of <paramiko.sftp_file.SFTPFile object at 0x7f0369d81f28>>
Traceback (most recent call last):
File "/home/ubuntu/myenv/lib/python3.6/site-packages/paramiko/sftp_file.py", line 76, in __del__
File "/home/ubuntu/myenv/lib/python3.6/site-packages/paramiko/sftp_file.py", line 108, in _close
TypeError: catching classes that do not inherit from BaseException is not allowed
Any help on this issue would be greatly appreciated.
I guess the error occurs because you do not close the SFTP files. When Python garbage-collects the Paramiko objects representing the unclosed SFTP files at the end of the script, the cleanup fails, as the underlying SFTP connection is already closed.
Make sure you close the file after you stop using it:
with sftp.file(SSH_DIR + "/" + filename, 'rb') as fobj:
    print(fobj)
    cur.execute('TRUNCATE TABLE %s' % TABLE_NAME)
    sql = "COPY %s FROM STDIN WITH DELIMITER AS ',' csv header"
    table = 'my_table'
    cur.copy_expert(sql=sql % table, file=fobj)
Side note: Do not use os.path.join for SFTP paths. SFTP paths need to use forward slashes, while os.path uses the local system's separator.
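If you want a join helper anyway, the standard library's posixpath module always joins with forward slashes, regardless of the local OS:

```python
import posixpath

# posixpath always uses '/', which is what SFTP servers expect;
# os.path.join would produce backslash separators on Windows.
remote_path = posixpath.join('/incoming', 'data.csv')
print(remote_path)  # /incoming/data.csv
```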
I was using psycopg2 in a Python script to connect to a Redshift database, and occasionally I received the error below:
psycopg2.OperationalError: SSL SYSCALL error: EOF detected
This error only happened once in a while; 90% of the time the script worked.
I tried to put it into a try/except block to catch the error, but the catching didn't work. For example, I tried to capture the error so that it would automatically send me an email when it happens. However, the email was not sent when the error occurred. Below is my try/except code:
try:
    conn2 = psycopg2.connect(host="localhost", port='5439',
                             database="testing", user="admin", password="admin")
except psycopg2.Error as e:
    print("Unable to connect!")
    print(e.pgerror)
    print(e.diag.message_detail)
    # Call check_row_count function to check today's number of rows
    # and send mail to notify issue
    print("Trigger send mail now")
    import status_mail
    print(status_mail.redshift_failed(YtdDate))
    sys.exit(1)
else:
    print("RedShift Database Connected")
    cur2 = conn2.cursor()
    rowcount = cur2.rowcount
Errors I received in my log:
Traceback (most recent call last):
  File "/home/ec2-user/dradis/dradisetl-daily.py", line 579, in
    load_from_redshift_to_s3()
  File "/home/ec2-user/dradis/dradisetl-daily.py", line 106, in load_from_redshift_to_s3
    delimiter as ','; """.format(YtdDate, s3location))
psycopg2.OperationalError: SSL SYSCALL error: EOF detected
So the question is, what causes this error and why isn't my try except block catching it?
From the docs:
exception psycopg2.OperationalError
Exception raised for errors that are related to the database’s
operation and not necessarily under the control of the programmer,
e.g. an unexpected disconnect occurs, the data source name is not
found, a transaction could not be processed, a memory allocation error
occurred during processing, etc.
This is an error which can be a result of many different things.
slow query
the process is running out of memory
other queries running causing tables to be locked indefinitely
running out of disk space
firewall
(You should definitely provide more information about these factors and more code.)
You were connected successfully but the OperationalError happened later.
Try to handle these disconnects in your script:
Put the command you want to execute in a try/except block and try to reconnect if the connection is lost.
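One library-agnostic sketch of that retry pattern; run_with_retry, connect, and execute are illustrative names standing in for your own connection factory and query function, not psycopg2 API:

```python
import time

def run_with_retry(connect, execute, retries=3, delay=1.0):
    """Run execute(conn); if it raises, reconnect and retry a few times.

    In real code you would catch psycopg2.OperationalError specifically
    rather than a bare Exception.
    """
    conn = connect()
    for attempt in range(retries):
        try:
            return execute(conn)
        except Exception:
            if attempt == retries - 1:
                raise          # out of retries: propagate the error
            time.sleep(delay)  # brief pause before reconnecting
            conn = connect()   # the old connection is assumed dead
```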
I recently encountered this error. In my case the cause was network instability while working with the database. If the network goes down long enough for the socket to detect a timeout, you will see this error; if the outage is shorter, you won't see any errors.
You can control the keepalive and retransmission timeouts with this code sample:
import socket

# conn is an open psycopg2 connection; tune TCP keepalive on its socket
s = socket.fromfd(conn.fileno(), socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 6)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 2)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 2)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 10000)
You can find more information in this post.
It would be helpful if you attached the actual code you are trying to except. Your attached stack trace points at: File "/home/ec2-user/dradis/dradisetl-daily.py", line 106.
Similar except code works fine for me. Mind you, e.pgerror will be empty if the error occurred on the client side, such as the error in my example; the e.diag object will also be useless in that case.
try:
    conn = psycopg2.connect('')
except psycopg2.Error as e:
    print('Unable to connect!\n{0}'.format(e))
else:
    print('Connected!')
Maybe it will be helpful for someone, but I got this error when I tried to restore a backup to a database that did not have sufficient space for it.
I have a script which connects to a remote server using pysftp and does basic operations such as put, get, and listing files on the remote server. The script does not show me the long listing of files in a folder on the remote machine; instead it prints the long listing of files at the current path on the local machine. I found this strange and have been trying out all possible solutions such as cwd, cd, chdir, etc. Please find the relevant portion of the code below and help me resolve the issue.
if command == 'LIST':
    print "Script will start listing files"
    try:
        s = pysftp.Connection('ip', username='user', password='pwd')
    except Exception, e:
        print e
        logfile.write("Unable to connect to FTP Server: Error is-->" + "\n")
        sys.exit()
    try:
        s.cwd('remote_path')
        print(s.pwd)
    except Exception, e:
        print e
        logfile.write("Unable to perform cwd:" + "\n")
        sys.exit()
    try:
        print(s.pwd)
        print(subprocess.check_output(["ls"]))
    except Exception, e:
        print "Unable to perform listing of files in Remote Directory"
        s.close()
        sys.exit()
Thanks and regards,
Shreeram
When using Python to make a connection to ShareFile via implicit FTPS I get the following:
Traceback (most recent call last):
    ftps.storbinary("STOR /file", open(file, "rb"), 1024)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ftplib.py", line 769, in storbinary
    conn.unwrap()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 791, in unwrap
    s = self._sslobj.shutdown()
SSLError: ('The read operation timed out',)
My tyFTP (required because implicit FTPS is not directly supported in ftplib) class comes from here: Python FTP implicit TLS connection issue. Here's the code:
ftps = tyFTP()
try:
    ftps.connect('ftps.host.domain', 990)
except:
    traceback.print_exc()
    traceback.print_stack()
ftps.login('uid', 'pwd')
ftps.prot_p()
try:
    ftps.storbinary("STOR /file", open(file, "rb"), 1024)
    # i also tried non-binary, but that didn't work either
    # ftps.storlines("STOR /file", open(file, "r"))
except:
    traceback.print_exc()
    traceback.print_stack()
This question has been asked previously, but the only solution provided is to hack the python code. Is that the best/only option?
ShareFile upload with Python 2.7.5 code timesout on FTPS STOR
ftplib - file creation very slow: SSLError: The read operation timed out
ftps.storlines socket.timeout despite file upload completing
There is also an old discussion about this issue on python.org: http://bugs.python.org/issue8108. The suggestion there is that this is an ambiguous situation that's difficult to fix (and maybe never was fixed?).
Please note: I would have added comments to the existing questions, but my reputation was not high enough to comment (new stack exchange user).
Sometimes the help you need is your own.
In order to fix this without directly modifying the ftplib code (which requires jumping through hoops on a Mac, because you cannot easily write/modify files under /System/Library), I overrode the storbinary method in ftplib.FTP_TLS. That essentially means using this fix for supporting implicit FTPS:
Python FTP implicit TLS connection issue
and then adding these lines to the class tyFTP, commenting out the conn.unwrap() call and replacing it with pass:
def storbinary(self, cmd, fp, blocksize=8192, callback=None, rest=None):
    self.voidcmd('TYPE I')
    conn = self.transfercmd(cmd, rest)
    try:
        while 1:
            buf = fp.read(blocksize)
            if not buf:
                break
            conn.sendall(buf)
            if callback:
                callback(buf)
        if isinstance(conn, ssl.SSLSocket):
            pass
            # conn.unwrap()
    finally:
        conn.close()
    return self.voidresp()
My issue with implicit FTP over TLS had bothered me for six months. This week, I decided it was time to fix it. Finally, I combined the code from George Leslie-Waksman and gaFF here, and Manager_of_it here, and it worked like a champ! Thank you to those three.
Hi, I am trying to copy a remote file on a server to a local location using Paramiko's SFTP client. Here is the code:
try:
    self.SFTP.get(remotepath, localpath, callback=None)
except IOError as e:
    print "File Not Found " + self.location
The remote location doesn't always contain the requested file, so I want to print the error message and end the process.
Unfortunately, it prints the message (the IOError message), but it also creates the local file with zero size.
Is this a bug, or is there another way to avoid this?
I would use:
sftp.stat(remotepath)
So, in your sample code:
try:
    if self.SFTP.stat(remotepath):
        self.SFTP.get(remotepath, localpath, callback=None)
except IOError as e:
    print "File Not Found " + self.location
SFTP - Paramiko documentation
It's expected.
Instead of trying to get a file that you don't know exists, I suggest you either:
first try to find it using the Paramiko SFTP listdir command, or
try to get an SFTPFile object from it using Paramiko SFTP file command.
If it fails, the file doesn't exist.
If it succeeds, just close the SFTPFile object, and download the file with the get command.