The following code:
import paramiko

t = paramiko.Transport((hostname, port))
t.connect(username=username, password=password)
sftp = t.open_sftp_client()
Raises this exception:
Traceback (most recent call last):
File "C:\Users\elqstux\workspace\WxPython\FetcchFile.py", line 41, in <module>
sftp = t.open_sftp_client()
File "C:\Python27\lib\site-packages\paramiko\transport.py", line 845, in open_sftp_client
return SFTPClient.from_transport(self)
File "C:\Python27\lib\site-packages\paramiko\sftp_client.py", line 106, in from_transport
return cls(chan)
File "C:\Python27\lib\site-packages\paramiko\sftp_client.py", line 87, in __init__
server_version = self._send_version()
File "C:\Python27\lib\site-packages\paramiko\sftp.py", line 108, in _send_version
t, data = self._read_packet()
File "C:\Python27\lib\site-packages\paramiko\sftp.py", line 179, in _read_packet
raise SFTPError('Garbage packet received')
SFTPError: Garbage packet received
My host's IP is 147.214.16.150; I use this command to test in the console:
esekilvxen245 [11:03am] [/home/elqstux] -> sftp 147.214.16.150
Connecting to 147.214.16.150...
These computer resources, specifically Internet access and E-mail, are
provided for authorized users only. For legal, security and cost
reasons, utilization and access of resources are monitored and recorded
in log files. All information (whether business or personal) that is
created, received, downloaded, stored, sent or otherwise processed can
be accessed, reviewed, copied, recorded or deleted by Ericsson, in
accordance with approved internal procedures, at any time if deemed
necessary or appropriate, and without advance notice. Any evidence of
unauthorized access or misuse of Ericsson resources may result in
disciplinary actions, including termination of employment or assignment,
and could subject a user to criminal prosecution. Your use of Ericsson's
computer resources constitutes your consent to Ericsson's Policies and
Directives, including the provisions stated above.
IF YOU ARE NOT AN AUTHORIZED USER, PLEASE EXIT IMMEDIATELY
Enter Windows Password:
Received message too long 1131770482
I had a similar problem and found that it was due to output from a program, 'gvm', launched from my shell startup file. Anything your startup scripts print to the session is read by the SFTP client as packet data, which is why you see Paramiko's 'Garbage packet received' and the console client's 'Received message too long'.
I fixed it by changing my .bashrc file:
#THIS MUST BE AT THE END OF THE FILE FOR GVM TO WORK!!!
#[[ -s "/home/micron/.gvm/bin/gvm-init.sh" ]] && source "/home/micron/.gvm/bin/gvm-init.sh" <== commented this out.
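If you hit the same error and are not sure what is polluting the stream, Paramiko's built-in logging can show what the transport receives when the failure happens. A minimal sketch, assuming hostname, port, username, and password are defined as in the question:
import paramiko

# Write Paramiko's debug log to a file so the failing SFTP handshake can be inspected.
paramiko.util.log_to_file("paramiko.log")

t = paramiko.Transport((hostname, port))
t.connect(username=username, password=password)
sftp = t.open_sftp_client()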
I am getting the below error when running the Python code:
sftp.put(local_path, remote_path, callback=track_progress, confirm=True)
But if I set confirm=False, then this error does not occur.
The definition of track_progress is as follows:
def track_progress(bytes_transferred, bytes_total):
    total_percent = 100
    transferred_percent = (bytes_transferred * total_percent) / bytes_total
    # 'file' is assumed to come from the enclosing scope of the original script
    result_str = (
        f"Filename: {file}, File Size={bytes_total}b |--> "
        f"Transfer Details={transferred_percent}% ({bytes_transferred}b) Transferred"
    )
    # self.logger.info(result_str)
    print(result_str)
Can anyone please help me understand the issue here?
Traceback (most recent call last):
File "D:/Users/prpandey/PycharmProjects/PMPPractise/Transport.py", line 59, in <module>
sftp.put(local_path, remote_path, callback=track_progress, confirm=True)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 759, in put
return self.putfo(fl, remotepath, file_size, callback, confirm)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 720, in putfo
s = self.stat(remotepath)
File "D:\Users\prpandey\PycharmProjects\PMPPractise\venv\lib\site-packages\paramiko\sftp_client.py", line 495, in stat
raise SFTPError("Expected attributes")
paramiko.sftp.SFTPError: Expected attributes
Paramiko log file:
As suggested, I have tried:
sftp.put(local_path, remote_path, callback=track_progress, confirm=False)
t, msg = sftp._request(CMD_STAT, remote_path)
The t is 101.
When you set confirm=True, SFTPClient.put asks the server for the size of the just-uploaded file. It does that to verify that the file size matches the size of the local source file. See also Paramiko put method throws "[Errno 2] File not found" if SFTP server has trigger to automatically move file upon upload.
The request for the size uses the SFTP "STAT" message, to which the server should return either an "ATTRS" message (file attributes) or a "STATUS" message (101) carrying an error. Your server seems to return a "STATUS" message with an "OK" status (my guess, based on your data and the Paramiko source code). "OK" is an invalid response to a "STAT" request. Paramiko does not expect such a nonsensical response, so it reports a somewhat unclear error. Ultimately, though, it is a bug in the server. All you can do is disable the verification by setting confirm=False.
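If the server cannot be fixed, here is a minimal sketch of the workaround, reusing sftp, local_path, remote_path, and track_progress from the question; the manual size check is optional and, against this particular server, may itself fail:
import os
from paramiko.sftp import SFTPError

sftp.put(local_path, remote_path, callback=track_progress, confirm=False)

# Optional: replicate what confirm=True would have verified.
try:
    if sftp.stat(remote_path).st_size != os.path.getsize(local_path):
        raise IOError("size mismatch after upload")
except SFTPError:
    # This server answers STAT with a bare OK status, so even a manual
    # stat may raise "Expected attributes"; skip verification in that case.
    pass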
I'm trying to build a list of files in a particular directory on an SFTP server and capture some of the attributes of said files. There's an issue that has been inconsistently popping up when connecting to the server, and I've been unable to find a solution. I say the issue is inconsistent because I can run my Databricks notebook one minute and have it return this particular error but then run it a few minutes later and have it complete successfully with absolutely no changes made to the notebook at all.
from base64 import decodebytes
import paramiko
import pysftp
keydata = b"""host key here"""
key = paramiko.RSAKey(data=decodebytes(keydata))
cnopts = pysftp.CnOpts()
cnopts.hostkeys.add('123.456.7.890', 'ssh-rsa', key)

hostname = "123.456.7.890"
user = "username"
pw = "password"

with pysftp.Connection(host=hostname, username=user, password=pw, cnopts=cnopts) as sftp:
    ...  # actions once the connection has been established
I get the below error message (when it does error out), and it flags the final line of code where I establish the SFTP connection as the culprit. I am unable to reproduce this error on demand. As I said, the code will sometimes run flawlessly and other times return the below error, even though I'm making no changes to the code between runs whatsoever.
Unknown exception: from_buffer() cannot return the address of the raw string within a bytes or unicode object
Traceback (most recent call last):
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/transport.py", line 2075, in run
self.kex_engine.parse_next(ptype, m)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/kex_curve25519.py", line 64, in parse_next
return self._parse_kexecdh_reply(m)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/kex_curve25519.py", line 128, in _parse_kexecdh_reply
self.transport._verify_key(peer_host_key_bytes, sig)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/transport.py", line 1886, in _verify_key
if not key.verify_ssh_sig(self.H, Message(sig)):
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/paramiko/rsakey.py", line 134, in verify_ssh_sig
msg.get_binary(), data, padding.PKCS1v15(), hashes.SHA1()
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py", line 474, in verify
self._backend, data, algorithm
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/cryptography/hazmat/backends/openssl/utils.py", line 41, in _calculate_digest_and_algorithm
hash_ctx.update(data)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/cryptography/hazmat/primitives/hashes.py", line 93, in update
self._ctx.update(data)
File "/local_disk0/pythonVirtualEnvDirs/virtualEnv-a488e5a9-de49-48a7-b684-893822004827/lib/python3.5/site-packages/cryptography/hazmat/backends/openssl/hashes.py", line 50, in update
data_ptr = self._backend._ffi.from_buffer(data)
TypeError: from_buffer() cannot return the address of the raw string within a bytes or unicode object
I am trying to run a simple script from
https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/data_analysis/lab2/python/grepc.py
(this code is a Dataflow pipeline connecting to Google Storage)
It worked last week, but when I run it now, I always get the same error:
Traceback (most recent call last):
File "grepc.py", line 50, in <module>
run()
File "grepc.py", line 44, in run
| 'write' >> beam.io.WriteToText(output_prefix)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/textio.py", line 391, in __init__
skip_header_lines=skip_header_lines)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/textio.py", line 89, in __init__
validate=validate)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/filebasedsource.py", line 105, in __init__
self._validate()
File "/usr/local/lib/python2.7/dist-packages/apache_beam/options/value_provider.py", line 109, in _f
return fnc(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/filebasedsource.py", line 165, in _validate
match_result = FileSystems.match([pattern], limits=[1])[0]
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/filesystems.py", line 131, in match
return filesystem.match(patterns, limits)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/gcsfilesystem.py", line 138, in match
raise BeamIOError("Match operation failed", exceptions)
apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://{MY_BUCKET}/javahelp/*.java': HttpAccessTokenRefreshError(u' This can occur if a VM was created with no service account or scopes.',)}
I have no idea how to solve this, and a lot of Googling did not help either.
Acquiring new user credentials to use for Application Default Credentials fixed my problem.
This is what I used:
gcloud auth application-default login
It is well documented here: https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login
You can find the explanation there: "This command is useful when you are developing code that would normally use a service account but need to run the code in a local development environment where it's easier to provide user credentials. The credentials will apply to all API calls that make use of the Application Default Credentials client library."
Another solution I found was to download a key file for the Compute Engine service account and export GOOGLE_APPLICATION_CREDENTIALS to point at that key file.
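For that second option, the environment variable can also be set from Python before the pipeline creates its GCS client; a minimal sketch (the key-file path is a placeholder):
import os

# Placeholder path to the downloaded service-account key file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"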
I'm assuming the {MY_BUCKET} in the error message is not literal and was replaced by your bucket name.
If you're running this from a GCE VM instance, can you run this command and paste the output here?
gcloud compute instances describe {instance-name} --zone {instance-zone}
The above would tell you what service accounts and scopes your VM instance has. And also:
gcloud projects get-iam-policy {project-name}
This would tell you what service accounts your project has. Please wipe out the project number or any info that you deem sensitive.
I'm trying to make the MQTT-to-ROS bridge work, and I keep getting this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2627, in _thread_main
self.loop_forever(retry_first_connection=True)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1407, in loop_forever
rc = self.loop(timeout, max_packets)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 923, in loop
rc = self.loop_read(max_packets)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1206, in loop_read
rc = self._packet_read()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1799, in _packet_read
rc = self._packet_handle()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2275, in _packet_handle
return self._handle_publish()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2461, in _handle_publish
self._handle_on_message(message)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2615, in _handle_on_message
t[1](self, self._userdata, message)
File "/home/animu/catkin_ws/src/mqtt_bridge-master/src/mqtt_bridge/bridge.py", line 114, in _callback_mqtt
ros_msg = self._create_ros_message(mqtt_msg)
File "/home/animu/catkin_ws/src/mqtt_bridge-master/src/mqtt_bridge/bridge.py", line 124, in _create_ros_message
msg_dict = self._deserialize(mqtt_msg.payload)
File "msgpack/_unpacker.pyx", line 143, in msgpack._unpacker.unpackb (msgpack/_unpacker.cpp:2143)
ExtraData: unpack(b) received extra data.
I can't find anything about it on the internet, as this bridge is, I guess, not commonly used. The only similar problems were in Salt and Kafka, and the solution is nowhere to be found. All Python libraries are up to date; I double-checked. The bridge sends messages from ROS to MQTT without any problems, both String and Bool types. Any message sent from MQTT ends up in this error, with nothing received on the ROS side.
It's a bit late, but I'll give some advice for future readers.
First, make sure you have installed all requirements for the bridge to function; check them by reading requirements.txt.
Second, edit the mqtt_bridge configuration file to match the topics from ROS and from your MQTT server, as well as the IP address/port of the MQTT server.
That's it.
Since Animu is a student, I assume this was an assignment from his training or internship company. An answer is probably of no use to him anymore, but since I also had this problem, I offer a solution for future readers:
In the bridge repository there is a file called "demo_params.yaml", or, if you have already named it differently, the .yaml file that contains your settings.
This file includes the following:
mqtt:
  client:
    protocol: 4  # MQTTv311
  connection:
    host: localhost
    port: 1883
    keepalive: 60
  private_path: device/001
serializer: msgpack:dumps
deserializer: msgpack:loads
bridge:
  # ping pong
  - factory: mqtt_bridge.bridge:RosToMqttBridge
    msg_type: std_msgs.msg:Bool
    topic_from: /ping
    topic_to: ping
  - factory: mqtt_bridge.bridge:MqttToRosBridge
    msg_type: std_msgs.msg:Bool
    topic_from: ping
    topic_to: /pong
As you can see, msgpack is used to serialize and deserialize the messages that are sent back and forth. This works fine in the ROS-to-MQTT direction; the other way around it does not, because a hand-published MQTT message is not in the format the Python code expects. You have two solutions for this.
Continue to work with msgpack and make sure that the MQTT messages you publish are already encoded the way msgpack would serialize them (binary). This is a tricky solution, since you usually want to keep MQTT messages human-readable when you publish them manually. If another program of yours publishes the MQTT messages, you can serialize the message with msgpack first and then publish it; the bridge then works as well (see the sketch below).
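A minimal sketch of that approach, assuming the paho-mqtt and msgpack packages and a broker on localhost, with the topic name taken from the config above:
import msgpack
import paho.mqtt.publish as publish

# Publish a payload that the bridge's msgpack:loads deserializer can decode.
payload = msgpack.packb({"data": True})
publish.single("ping", payload, hostname="localhost", port=1883)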
The other option is to use JSON serialization instead of msgpack serialization. This is the bridge's default, but you can also specify it explicitly in your .yaml file. You do this by changing this:
serializer: msgpack:dumps
deserializer: msgpack:loads
to this:
serializer: json:dumps
deserializer: json:loads
Now you can publish MQTT messages both manually and from software.
You do this as follows:
mosquitto_pub -t 'echo' -m '{"data": "test"}'
I've been googling for days trying to find a straight answer as to why this is happening, but can't find anything useful. I have a web2py application that simply reads a database and makes some requests to a REST API. It is a health-check monitor, so it refreshes itself every minute. There are about 20 or so users at any given time. Here is the error I'm seeing very consistently in the log file:
ERROR:Rocket.Errors.Port8080:Traceback (most recent call last):
File "/opt/apps/web2py/gluon/rocket.py", line 562, in listen
sock = self.wrap_socket(sock)
File "/opt/apps/web2py/gluon/rocket.py", line 506, in wrap_socket
ssl_version = ssl.PROTOCOL_SSLv23)
File "/usr/local/lib/python2.7/ssl.py", line 342, in wrap_socket
ciphers=ciphers)
File "/usr/local/lib/python2.7/ssl.py", line 121, in __init__
self.do_handshake()
File "/usr/local/lib/python2.7/ssl.py", line 281, in do_handshake
self._sslobj.do_handshake()
error: [Errno 104] Connection reset by peer
Based on some googling, the most promising piece of information is that someone is trying to connect through a firewall which is killing the connection; however, I don't understand why it's taking the actual application down. The process is still running, but no one can connect and I have to restart web2py.
I will be very appreciative of any input here; I'm beyond frustrated.
Thanks!
The most common source of "Connection reset by peer" errors is that the remote client decides it doesn't want to talk to you anymore and cancels the interaction (with a shutdown/RST packet). This happens, for example, when the user navigates to a different page while the site is loading.
In your case, the remote host gave up on the connection even before you got to read or write anything on it. With the current web2py, this should only output the warning you're seeing, and not terminate anything.
If you are running a current web2py, the failure to connect is unrelated to these error messages. If you have an old version of web2py, you should update.
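For illustration only (this is not web2py's actual code): a server that wraps accepted sockets in TLS typically treats a client reset during the handshake as a per-connection event that is logged and skipped, along these lines. The function and variable names here are hypothetical.
import errno
import socket
import ssl

def wrap_client_socket(sock, context):
    """Wrap an accepted socket in TLS, tolerating clients that give up mid-handshake."""
    try:
        return context.wrap_socket(sock, server_side=True)
    except (socket.error, OSError) as e:
        if getattr(e, "errno", None) == errno.ECONNRESET:
            # The client reset the connection during the handshake; log and move on.
            print("client dropped the connection during the SSL handshake")
            return None
        raise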