I am getting the below error when running the Python code:
sftp.put(local_path, remote_path, callback=track_progress, confirm=True)
But if I set confirm=False, this error doesn't occur.
Definition of track_progress is as follows:
def track_progress(bytes_transferred, bytes_total):
    total_percent = 100
    transferred_percent = (bytes_transferred * total_percent) / bytes_total
    # `file` comes from the enclosing scope
    result_str = (
        f"Filename: {file}, File Size={bytes_total}b |--> "
        f"Transfer Details={transferred_percent}% "
        f"({bytes_transferred}b) Transferred"
    )
    # self.logger.info(result_str)
    print(result_str)
Can anyone please help me understand the issue here?
Traceback (most recent call last):
File "D:/Users/prpandey/PycharmProjects/PMPPractise/Transport.py", line 59, in <module>
sftp.put(local_path, remote_path, callback=track_progress, confirm=True)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 759, in put
return self.putfo(fl, remotepath, file_size, callback, confirm)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 720, in putfo
s = self.stat(remotepath)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 495, in stat
raise SFTPError("Expected attributes")
paramiko.sftp.SFTPError: Expected attributes
Paramiko log file:
As suggested, I have tried:
sftp.put(local_path, remote_path, callback=track_progress, confirm=False)
t, msg = sftp._request(CMD_STAT, remote_path)
The t is 101.
When you set confirm=True, SFTPClient.put asks the server for the size of the just-uploaded file. It does that to verify that the file size matches the size of the local source file. See also Paramiko put method throws "[Errno 2] File not found" if SFTP server has trigger to automatically move file upon upload.
The size request uses the SFTP "STAT" message, to which the server should return either an "ATTRS" message (file attributes) or a "STATUS" message (101) with an error. Your server seems to return a "STATUS" message with an "OK" status (my guess, based on your data and the Paramiko source code). "OK" is an invalid response to a "STAT" request. Paramiko does not expect such a nonsensical response, so it reports a somewhat unclear error. But ultimately it's a bug in the server. All you can do is disable the verification by setting confirm=False.
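If you still want some verification despite confirm=False, a manual size check can be layered on top. This is my own sketch (the helper name upload_with_manual_check is not from the question), written to tolerate servers that mishandle STAT:

```python
import os

def upload_with_manual_check(sftp, local_path, remote_path):
    # Skip Paramiko's built-in stat() confirmation, which this
    # server answers with a bogus "OK" status instead of ATTRS.
    sftp.put(local_path, remote_path, confirm=False)
    try:
        remote_size = sftp.stat(remote_path).st_size
    except Exception:
        # Server cannot answer STAT correctly; verification unavailable.
        return None
    # True if the uploaded size matches the local file's size.
    return remote_size == os.path.getsize(local_path)
```

Returning None rather than raising leaves it to the caller to decide whether an unverifiable upload is acceptable.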
I'm trying to setup a GRPC client in Python to hit a particular server. The server is setup to require authentication via access token. Therefore, my implementation looks like this:
from grpc import (
    access_token_call_credentials,
    composite_channel_credentials,
    secure_channel,
    ssl_channel_credentials,
)

def create_connection(target, access_token):
    credentials = composite_channel_credentials(
        ssl_channel_credentials(),
        access_token_call_credentials(access_token))
    target = target if target else DEFAULT_ENDPOINT
    return secure_channel(target=target, credentials=credentials)
conn = create_connection(svc="myservice", session=Session(client_id=id, client_secret=secret))
stub = FakeStub(conn)
stub.CreateObject(CreateObjectRequest())
The issue I'm having is that, when I attempt to use this connection I get the following error:
File "<stdin>", line 1, in <module>
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 216, in __call__
response, ignored_call = self._with_call(request,
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 257, in _with_call
return call.result(), call
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 343, in result
raise self
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 241, in continuation
response, call = self._thunk(new_method).with_call(
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 266, in with_call
return self._with_call(request,
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 257, in _with_call
return call.result(), call
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 343, in result
raise self
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 241, in continuation
response, call = self._thunk(new_method).with_call(
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 957, in with_call
return _end_unary_response_blocking(state, call, True, None)
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{
"created":"#1633399048.828000000",
"description":"Failed to pick subchannel",
"file":"src/core/ext/filters/client_channel/client_channel.cc",
"file_line":3159,
"referenced_errors":[
{
"created":"#1633399048.828000000",
"description":
"failed to connect to all addresses",
"file":"src/core/lib/transport/error_utils.cc",
"file_line":147,
"grpc_status":14
}
]
}"
I looked up the status code associated with this response and it seems that the server is unavailable. So, I tried waiting for the connection to be ready:
channel_ready_future(conn).result()
but this hangs. What am I doing wrong here?
UPDATE 1
I converted the code to use the async connection instead of the synchronous connection but the issue still persists. Also, I saw that this question had also been posted on SO but none of the solutions presented there fixed the problem I'm having.
UPDATE 2
I assumed that this issue was occurring because the client couldn't find the TLS certificate issued by the server so I added the following code:
import ssl

def _get_cert(target: str) -> bytes:
    host, port = target.split(":")
    # get_server_certificate expects the port as an int
    data = ssl.get_server_certificate((host, int(port)))
    return data.encode()
and then changed ssl_channel_credentials() to ssl_channel_credentials(_get_cert(target)). However, this also hasn't fixed the problem.
The issue here was actually fairly deep. First, I turned on tracing and set GRPC log-level to debug and then found this line:
D1006 12:01:33.694000000 9032 src/core/lib/security/transport/security_handshaker.cc:182] Security handshake failed: {"created":"#1633489293.693000000","description":"Cannot check peer: missing selected ALPN property.","file":"src/core/lib/security/security_connector/ssl_utils.cc","file_line":160}
This led me to this GitHub issue, which stated that the problem was grpcio not inserting the h2 protocol into requests, which would cause ALPN-enabled servers to return that specific error. Some further digging led me to this issue, and since the server I was connecting to also uses Envoy, it was just a matter of modifying the Envoy deployment file so that:
clusters:
- name: my-server
  connect_timeout: 10s
  type: strict_dns
  lb_policy: round_robin
  http2_protocol_options: {}
  hosts:
  - socket_address:
      address: python-server
      port_value: 1337
  tls_context:
    common_tls_context:
      tls_certificates:
      alpn_protocols: ["h2"]  # <====== Add this.
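For reference, the tracing mentioned above can be reproduced with gRPC's standard environment variables (they must be set before grpc is imported; the variable names are from the gRPC runtime, the values shown are just one reasonable choice):

```python
import os

# Enable verbose gRPC tracing; do this before `import grpc`.
os.environ["GRPC_VERBOSITY"] = "DEBUG"
os.environ["GRPC_TRACE"] = "all"  # or a comma-separated list, e.g. "http,tcp"
```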
I have a Python script that I've been using for the last couple of months without any issues.
On my last attempt at running the script I encountered the "please run connect() first" error.
I've reviewed related questions here, but in my case the behaviour is kind of odd.
The e-mail sending function ran twice (as expected), but I got the error only for the second function call (running in a loop).
Not sure why it would work for the first function call, but not for the second one.
I'll also say that it's not the first time I'm calling the function twice, but it is the first time it failed on the second call.
Hopefully someone has an idea what could cause the error, and how to fix it.
Thanks in advance for the help.
import smtplib
from email.message import EmailMessage

smtp_host = 'AWS'  # Not the real value
port = 465
message = "my message"

server = smtplib.SMTP_SSL(smtp_host, port, 'email.com')
msg = EmailMessage()
msg.set_content(message)
msg['Subject'] = "maintenance"
msg['From'] = 'test1@email.com'
msg['To'] = 'test2@email.com'
server.login(args.email_name, args.email_passwd)
server.send_message(msg)
server.quit()
Edit:
The same issue happened again, this time I was able to pull out the traceback:
Traceback (most recent call last):
File "/home/jenkins/workspace/NOC_Maintenance_Scheduler_master/NOC_Scripts/Maintenance_Scheduler/SQL_Email.py", line 43, in execute_send
server.login(args.email_name, args.email_passwd)
File "/usr/lib/python3.5/smtplib.py", line 693, in login
self.ehlo_or_helo_if_needed()
File "/usr/lib/python3.5/smtplib.py", line 599, in ehlo_or_helo_if_needed
if not (200 <= self.ehlo()[0] <= 299):
File "/usr/lib/python3.5/smtplib.py", line 439, in ehlo
self.putcmd(self.ehlo_msg, name or self.local_hostname)
File "/usr/lib/python3.5/smtplib.py", line 366, in putcmd
self.send(str)
File "/usr/lib/python3.5/smtplib.py", line 358, in send
raise SMTPServerDisconnected('please run connect() first')
smtplib.SMTPServerDisconnected: please run connect() first
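Since the traceback shows the server dropping the connection between the two sends, one common workaround is to open a fresh connection per message instead of reusing one SMTP_SSL object across loop iterations. A sketch under that assumption (the send_maintenance helper and its parameters are mine, not from the question):

```python
import smtplib
from email.message import EmailMessage

def send_maintenance(host, port, sender, recipient, body, user, password):
    """Build and send one message over a fresh SMTP_SSL connection."""
    msg = EmailMessage()
    msg.set_content(body)
    msg['Subject'] = "maintenance"
    msg['From'] = sender
    msg['To'] = recipient
    # A new connection per call: a server-side disconnect after one
    # send cannot poison the next loop iteration.
    with smtplib.SMTP_SSL(host, port) as server:
        server.login(user, password)
        server.send_message(msg)
```

The with block also guarantees the connection is closed even if login or send_message raises.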
I'm trying to build a list of files in a particular directory on an SFTP server and capture some of the attributes of said files. There's an issue that has been inconsistently popping up when connecting to the server, and I've been unable to find a solution. I say the issue is inconsistent because I can run my Databricks notebook one minute and have it return this particular error but then run it a few minutes later and have it complete successfully with absolutely no changes made to the notebook at all.
from base64 import decodebytes
import paramiko
import pysftp

keydata = b"""host key here"""
key = paramiko.RSAKey(data=decodebytes(keydata))
cnopts = pysftp.CnOpts()
cnopts.hostkeys.add('123.456.7.890', 'ssh-rsa', key)

hostname = "123.456.7.890"
user = "username"
pw = "password"

with pysftp.Connection(host=hostname, username=user, password=pw, cnopts=cnopts) as sftp:
    # ... actions once the connection has been established ...
I get the below error message (when it does error out), and it flags the final line of code where I establish the SFTP connection as the culprit. I am unable to reproduce this error on demand. As I said, the code will sometimes run flawlessly and other times return the below error, even though I'm making no changes to the code between runs whatsoever.
Unknown exception: from_buffer() cannot return the address of the raw string within a bytes or unicode object
Traceback (most recent call last):
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/transport.py", line 2075, in run
self.kex_engine.parse_next(ptype, m)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/kex_curve25519.py", line 64, in parse_next
return self._parse_kexecdh_reply(m)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/kex_curve25519.py", line 128, in _parse_kexecdh_reply
self.transport._verify_key(peer_host_key_bytes, sig)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/transport.py", line 1886, in _verify_key
if not key.verify_ssh_sig(self.H, Message(sig)):
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/rsakey.py", line 134, in verify_ssh_sig
msg.get_binary(), data, padding.PKCS1v15(), hashes.SHA1()
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py", line 474, in verify
self._backend, data, algorithm
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/cryptography/hazmat/backends/openssl/utils.py", line 41, in _calculate_digest_and_algorithm
hash_ctx.update(data)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/cryptography/hazmat/primitives/hashes.py", line 93, in update
self._ctx.update(data)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/cryptography/hazmat/backends/openssl/hashes.py", line 50, in update
data_ptr = self._backend._ffi.from_buffer(data)
TypeError: from_buffer() cannot return the address of the raw string within a bytes or unicode object
I am using pysftp to connect to a server and upload a file to it.
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
self.sftp = pysftp.Connection(host=self.serverConnectionAuth['host'],
                              port=self.serverConnectionAuth['port'],
                              username=self.serverConnectionAuth['username'],
                              password=self.serverConnectionAuth['password'],
                              cnopts=cnopts)
self.sftp.put(localpath=self.filepath + filename, remotepath=filename)
Sometimes it completes with no error, but sometimes it puts the file correctly BUT raises the following exception. The file is read and processed by another program running on the server, so I can see that the file is there and is not corrupted.
File "E:\Anaconda\envs\py35\lib\site-packages\pysftp\__init__.py", line 364, in put
confirm=confirm)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 727, in put
return self.putfo(fl, remotepath, file_size, callback, confirm)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 689, in putfo
s = self.stat(remotepath)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 460, in stat
t, msg = self._request(CMD_STAT, path)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 780, in _request
return self._read_response(num)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 832, in _read_response
self._convert_status(msg)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 861, in _convert_status
raise IOError(errno.ENOENT, text)
FileNotFoundError: [Errno 2] No such file
How can I prevent the exception?
From the described behaviour, I assume that the file is removed very shortly after it is uploaded, by some server-side process.
By default, pysftp.Connection.put verifies the upload by checking the size of the target file. If the server-side process manages to remove the file too fast, reading the file size fails.
You can disable the post-upload check by setting confirm parameter to False:
self.sftp.put(localpath=self.filepath+filename, remotepath=filename, confirm=False)
I believe the check is redundant anyway, see
How to perform checksums during a SFTP file transfer for data integrity?
For a similar question about Paramiko (which pysftp uses internally), see:
Paramiko put method throws "[Errno 2] File not found" if SFTP server has trigger to automatically move file upon upload
I also had this issue of the file automatically getting moved before Paramiko could stat the uploaded file and compare the local and uploaded file sizes.
@Martin_Prikryl's solution works fine for removing the error: pass confirm=False when using sftp.put or sftp.putfo.
If you want this check to still run to verify the file has been uploaded fully, you can run something along these lines. For this to work you will need to know the moved file's location and have the ability to read it.
import os

sftp.putfo(source_file_object, destination_file, confirm=False)
upload_size = sftp.stat(moved_path).st_size
# os.stat() needs a path; for an open file object use os.fstat()
local_size = os.fstat(source_file_object.fileno()).st_size
if upload_size != local_size:
    raise IOError(
        "size mismatch in put! {} != {}".format(upload_size, local_size)
    )
Both sizes come from stat calls: SFTP stat for the remote file, os.fstat for the local one.
I've written my first Python application with the App Engine APIs. It is intended to monitor a list of servers and notify me when one of them goes down, by sending a message to my iPhone using Prowl, or sending me an email, or both.
Problem is, a few times a week it notifies me a server is down even when it clearly isn't. I've tested it with servers I know should be up virtually all the time, like google.com or amazon.com, but I get notifications for them too.
I've got a copy of the code running at http://aeservmon.appspot.com; you can see that google.com was added Jan 3rd but is only listed as being up for 6 days.
Below is the relevant section of the code from checkservers.py that does the checking with urlfetch. I assumed that the DownloadError exception would only be raised when the server couldn't be contacted, but perhaps I'm wrong.
What am I missing?
Full source is on GitHub under mrsteveman1/aeservmon (I can only post one link as a new user, sorry!)
def testserver(self, server):
    if server.ssl:
        prefix = "https://"
    else:
        prefix = "http://"
    try:
        url = prefix + "%s" % server.serverdomain
        result = urlfetch.fetch(url, headers={'Cache-Control': 'max-age=30'})
    except DownloadError:
        logging.info('%s could not be reached' % server.serverdomain)
        self.serverisdown(server, 0)
        return
    if result.status_code == 500:
        logging.info('%s returned 500' % server.serverdomain)
        self.serverisdown(server, result.status_code)
    else:
        logging.info('%s is up, status code %s' % (server.serverdomain, result.status_code))
        self.serverisup(server, result.status_code)
UPDATE Jan 21:
Today I found one of the exceptions in the logs:
ApplicationError: 5
Traceback (most recent call last):
File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 507, in __call__
handler.get(*groups)
File "/base/data/home/apps/aeservmon/1.339312180538855414/checkservers.py", line 149, in get
self.testserver(server)
File "/base/data/home/apps/aeservmon/1.339312180538855414/checkservers.py", line 106, in testserver
result = urlfetch.fetch(url, headers = {'Cache-Control' : 'max-age=30'} )
File "/base/python_lib/versions/1/google/appengine/api/urlfetch.py", line 241, in fetch
return rpc.get_result()
File "/base/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 501, in get_result
return self.__get_result_hook(self)
File "/base/python_lib/versions/1/google/appengine/api/urlfetch.py", line 331, in _get_fetch_result
raise DownloadError(str(err))
DownloadError: ApplicationError: 5
Other folks have been reporting issues with the fetch service (e.g. http://code.google.com/p/googleappengine/issues/detail?id=1902&q=urlfetch&colspec=ID%20Type%20Status%20Priority%20Stars%20Owner%20Summary%20Log%20Component).
Can you print the exception? It may have more detail, e.g.:
"DownloadError: ApplicationError: 2 something bad"
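Since these fetch errors are often transient, a mitigation worth considering (my own sketch, not from the original post; fetch_with_retries is a hypothetical helper) is to retry a couple of times before declaring a server down:

```python
def fetch_with_retries(fetch, url, attempts=3):
    """Call fetch(url), retrying on any exception; return the first
    successful result, or re-raise the last error once attempts run out."""
    last_err = None
    for _ in range(attempts):
        try:
            return fetch(url)
        except Exception as err:
            last_err = err
    raise last_err
```

In testserver, urlfetch.fetch would be passed as fetch, so only a server that fails several times in a row gets flagged as down.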