I'm trying to make the MQTT-to-ROS bridge (mqtt_bridge) work, and I keep getting this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2627, in _thread_main
self.loop_forever(retry_first_connection=True)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1407, in loop_forever
rc = self.loop(timeout, max_packets)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 923, in loop
rc = self.loop_read(max_packets)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1206, in loop_read
rc = self._packet_read()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1799, in _packet_read
rc = self._packet_handle()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2275, in _packet_handle
return self._handle_publish()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2461, in _handle_publish
self._handle_on_message(message)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2615, in _handle_on_message
t[1](self, self._userdata, message)
File "/home/animu/catkin_ws/src/mqtt_bridge-master/src/mqtt_bridge/bridge.py", line 114, in _callback_mqtt
ros_msg = self._create_ros_message(mqtt_msg)
File "/home/animu/catkin_ws/src/mqtt_bridge-master/src/mqtt_bridge/bridge.py", line 124, in _create_ros_message
msg_dict = self._deserialize(mqtt_msg.payload)
File "msgpack/_unpacker.pyx", line 143, in msgpack._unpacker.unpackb (msgpack/_unpacker.cpp:2143)
ExtraData: unpack(b) received extra data.
I can't find anything about it on the internet, as this bridge is, I guess, not commonly used. The only similar problems were in Salt and Kafka, but no solution was given there either. All Python libraries are up to date; I double-checked. The bridge sends messages from ROS to MQTT without any problems, both String and Bool types. Any message sent from MQTT ends in this error, and nothing arrives on the ROS side.
It's a bit late, but I'll give some advice for future readers.
First, make sure you have installed everything the bridge needs to function; the dependencies are listed in requirements.txt.
Second, edit the mqtt_bridge configuration file so the topics match those on your ROS side and on your MQTT server, and set the IP address/port of the MQTT server there as well.
That's it.
Since Animu is a student, I assume this was an assignment from a training or internship company. An answer is probably of no use to the original poster anymore, but since I also had this problem, here is a solution for future readers:
In the bridge repository there is a file called "demo_params.yaml" (or, if you have renamed it, whatever .yaml file contains your settings).
This file includes the following:
mqtt:
  client:
    protocol: 4  # MQTTv311
  connection:
    host: localhost
    port: 1883
    keepalive: 60
  private_path: device/001
serializer: msgpack:dumps
deserializer: msgpack:loads
bridge:
  # ping pong
  - factory: mqtt_bridge.bridge:RosToMqttBridge
    msg_type: std_msgs.msg:Bool
    topic_from: /ping
    topic_to: ping
  - factory: mqtt_bridge.bridge:MqttToRosBridge
    msg_type: std_msgs.msg:Bool
    topic_from: ping
    topic_to: /pong
As you can see, msgpack is used to serialize and deserialize the messages that are sent back and forth. This works fine in the ROS-to-MQTT direction. The other way around it fails, because the bridge tries to msgpack-decode whatever arrives on the MQTT topic, and a plain-text payload is not valid msgpack (hence the ExtraData error above). You have two ways to solve this.
Continue to work with msgpack and make sure that the MQTT messages you publish are already encoded the way msgpack would serialize them (i.e. binary). This is a tricky solution if you publish MQTT messages manually, since you usually want those to stay human-readable. But if another program of yours publishes the MQTT messages, you can simply serialize the message with msgpack first and then publish it; the bridge then works.
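For instance, a minimal sketch using paho-mqtt and msgpack (the topic, host, and port are assumptions matching the demo config above; adjust them to yours):
import msgpack
import paho.mqtt.publish as publish

# Pack the message dict the same way the bridge's msgpack:loads expects;
# a std_msgs/Bool message is represented as {"data": <bool>}.
payload = msgpack.packb({"data": True})
publish.single("ping", payload, hostname="localhost", port=1883)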
The other option is JSON serialization instead of msgpack. This is the bridge's default, but you can also specify it explicitly in your .yaml file. You do this by changing this:
serializer: msgpack:dumps
deserializer: msgpack:loads
to this:
serializer: json:dumps
deserializer: json:loads
Now you can publish MQTT messages both manually and from software, and they stay human-readable.
You do this as follows:
mosquitto_pub -t 'echo' -m '{"data": "test"}'
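And from software, a rough Python equivalent (again assuming a local broker):
import json
import paho.mqtt.publish as publish

# With json:dumps / json:loads configured, a plain JSON string is all
# the bridge needs in order to deserialize the message.
publish.single("echo", json.dumps({"data": "test"}), hostname="localhost", port=1883)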
I need a simple reverse proxy for Python 3.
I like Twisted and its simple reverse HTTP proxy (http://twistedmatrix.com/documents/14.0.1/_downloads/reverse-proxy.py) ...
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
This example demonstrates how to run a reverse proxy.
Run this example with:
$ python reverse-proxy.py
Then visit http://localhost:8080/ in your web browser.
"""
from twisted.internet import reactor
from twisted.web import proxy, server
site = server.Site(proxy.ReverseProxyResource('www.yahoo.com', 80, ''))
reactor.listenTCP(8080, site)
reactor.run()
... but it throws an error in Python 3.
Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/twisted/protocols/basic.py", line 571, in dataReceived
why = self.lineReceived(line)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/http.py", line 1752, in lineReceived
self.allContentReceived()
File "/usr/local/lib/python3.4/dist-packages/twisted/web/http.py", line 1845, in allContentReceived
req.requestReceived(command, path, version)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/http.py", line 766, in requestReceived
self.process()
--- <exception caught here> ---
File "/usr/local/lib/python3.4/dist-packages/twisted/web/server.py", line 185, in process
resrc = self.site.getResourceFor(self)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/server.py", line 791, in getResourceFor
return resource.getChildForRequest(self.resource, request)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/resource.py", line 98, in getChildForRequest
resource = resource.getChildWithDefault(pathElement, request)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/resource.py", line 201, in getChildWithDefault
return self.getChild(path, request)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/proxy.py", line 278, in getChild
self.host, self.port, self.path + b'/' + urlquote(path, safe=b"").encode('utf-8'),
builtins.TypeError: Can't convert 'bytes' object to str implicitly
Is there something similar that works in Python 3?
Having stumbled into this problem myself, I was disappointed to find no answer here. After some digging, I was able to get it working by changing the path argument of proxy.ReverseProxyResource to bytes rather than str, giving the following line:
site = server.Site(proxy.ReverseProxyResource("www.yahoo.com", 80, b''))
This is necessary because Twisted appends a trailing slash as bytes (i.e. b'/') internally.
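For completeness, the whole working example under Python 3 then becomes:
from twisted.internet import reactor
from twisted.web import proxy, server

# The path argument must be bytes (b'') under Python 3, because Twisted
# concatenates it with b'/' internally.
site = server.Site(proxy.ReverseProxyResource('www.yahoo.com', 80, b''))
reactor.listenTCP(8080, site)
reactor.run()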
I found this simple package on PyPI; it seems to work well and is similarly simple.
https://pypi.python.org/pypi/maproxy
I have a school project in which I need to create a DNS server.
(I can only use scapy, no other libs. I have Python 2.7 32-bit and I'm on a wired Ethernet connection.)
If I do have the requested domain name in my database, I just send it back to my client.
If I don't have it, I need to ask 8.8.8.8 (or another server), send the response on to the client, and then store the answer in my cache. Sending the answer works perfectly, but storing the information is the problem.
8.8.8.8's answer packets come to me as Ether()/IP()/UDP()/Raw(). To send the answer to the client I just change the IP and UDP addresses and ports and send the Raw layer on top of them, and that works.
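For reference, that forwarding step looks roughly like this (a sketch; the variable names are illustrative, not from my actual code):
from scapy.all import IP, UDP, Raw, send

def forward_answer(answer_pkt, client_ip, client_port, server_ip):
    # Swap the addresses/ports so 8.8.8.8's answer goes back to the client,
    # re-sending the untouched Raw payload on top.
    reply = (IP(src=server_ip, dst=client_ip) /
             UDP(sport=53, dport=client_port) /
             Raw(load=answer_pkt[Raw].load))
    send(reply)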
But the [Raw].load of the answer packet is encoded in a way I don't understand (and .decode('utf-8'/16/32) didn't work). For example, here is what that load looks like:
###[ Raw ]###
load = '\x007\x81\x80\x00\x01\x00\x02\x00\x00\x00\x00\x03www\x03aeo
\x03com\x00\x00\x01\x00\x01\xc0\x0c\x00\x01\x00\x01\x00\x00/\xab\x00\x04\xa5\xa0
\x0f\x14\xc0\x0c\x00\x01\x00\x01\x00\x00/\xab\x00\x04\xa5\xa0\r\x14'
I found out that it may be the str(answerpacket[DNS]) version of it, but when I try to convert it back to a DNS layer (using DNS(answerpkt[Raw].load)) it gives me the following error:
File "C:\Python27\lib\site-packages\scapy\base_classes.py", line 198, in __cal
l__
i.__init__(*args, **kargs)
File "C:\Python27\lib\site-packages\scapy\packet.py", line 80, in __init__
self.dissect(_pkt)
File "C:\Python27\lib\site-packages\scapy\packet.py", line 575, in dissect
s = self.do_dissect(s)
File "C:\Python27\lib\site-packages\scapy\packet.py", line 549, in do_dissect
s,fval = f.getfield(self, s)
File "C:\Python27\lib\site-packages\scapy\layers\dns.py", line 145, in getfiel
d
rr,p = self.decodeRR(name, s, p)
File "C:\Python27\lib\site-packages\scapy\layers\dns.py", line 123, in decodeR
R
rr = DNSRR("\x00"+ret+s[p:p+rdlen])
File "C:\Python27\lib\site-packages\scapy\base_classes.py", line 198, in __cal
l__
i.__init__(*args, **kargs)
File "C:\Python27\lib\site-packages\scapy\packet.py", line 80, in __init__
self.dissect(_pkt)
File "C:\Python27\lib\site-packages\scapy\packet.py", line 575, in dissect
s = self.do_dissect(s)
File "C:\Python27\lib\site-packages\scapy\packet.py", line 549, in do_dissect
s,fval = f.getfield(self, s)
File "C:\Python27\lib\site-packages\scapy\fields.py", line 509, in getfield
return s[l:], self.m2i(pkt,s[:l])
File "C:\Python27\lib\site-packages\scapy\layers\dns.py", line 177, in m2i
s = inet_ntop(family, s)
File "C:\Python27\lib\site-packages\scapy\pton_ntop.py", line 66, in inet_ntop
return inet_ntoa(addr)
NameError: global name 'inet_ntoa' is not defined
So my question is: how do I convert that escaped '\x00...' string back into a DNS layer that I can extract information from?
This looks like a bug in scapy that's triggered on Windows machines (http://permalink.gmane.org/gmane.comp.security.scapy.general/4733). Unfortunately, it looks like the bugfix isn't in the latest versions of scapy.
The recommendation in that post is to edit the scapy source directly. In your case, edit C:\Python27\lib\site-packages\scapy\pton_ntop.py and change line 66 from:
return inet_ntoa(addr)
to:
return socket.inet_ntoa(addr)
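With that change in place, the conversion from the question should work; roughly (answerpkt being the packet received from 8.8.8.8, as in the question):
from scapy.all import DNS, Raw

# Re-dissect the raw UDP payload as a DNS layer and read the first answer.
dns_layer = DNS(answerpkt[Raw].load)
print(dns_layer.an.rdata)  # e.g. the A record's IP address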
I'm on version 1.9.9 of the SDK and I'm having issues with the devserver. I have a manually scaled module with one instance, and I created a webapp2.RequestHandler for /_ah/start that starts a background thread. When I run my app in the devserver, the /_ah/start handler returns a 200, but /_ah/background will randomly return 500 errors for a while. After some time (usually a minute or two, but sometimes more) the 500 errors stop, but they recur every few hours. It also seems that every time I open a new browser tab (Chrome), I get the same error. Does anyone know what could be causing this?
Here is the RequestHandler for /_ah/start:
import webapp2
from google.appengine.api import background_thread, runtime

class StartupHandler(webapp2.RequestHandler):
    def get(self):
        runtime.set_shutdown_hook(shutdown_hook)  # shutdown_hook, Foo and do_foo are defined elsewhere in my app
        global foo
        if foo is None:
            foo = Foo()
        background_thread.start_new_background_thread(do_foo, [])
        self.response.set_status(200)  # set_status is the webapp2 way to set the response code
Here is the 500 error:
ERROR 2014-08-18 07:39:36,256 module.py:717] Request to '/_ah/background' failed
Traceback (most recent call last):
File "\appengine\tools\devappserver2\module.py", line 694, in _handle_request
environ, wrapped_start_response)
File "\appengine\tools\devappserver2\request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "\appengine\tools\devappserver2\module.py", line 1672, in _handle_script_request
request_type)
File "\appengine\tools\devappserver2\module.py", line 1624, in _handle_instance_request
request_id, request_type)
File "\appengine\tools\devappserver2\instance.py", line 382, in handle
request_type))
File "\appengine\tools\devappserver2\http_proxy.py", line 190, in handle
response = connection.getresponse()
File "E:\Programing\Python27\lib\httplib.py", line 1030, in getresponse
response.begin()
File "E:\Programing\Python27\lib\httplib.py", line 407, in begin
version, status, reason = self._read_status()
File "E:\Programing\Python27\lib\httplib.py", line 365, in _read_status
line = self.fp.readline()
File "E:\Programing\Python27\lib\socket.py", line 430, in readline
data = recv(1)
error: [Errno 10054] An existing connection was forcibly closed by the remote host
INFO 2014-08-18 07:39:36,257 module.py:1890] Waiting for instances to restart
INFO 2014-08-18 07:39:36,262 module.py:642] lease: "GET /_ah/background HTTP/1.1" 500 -
This might not be the answer, but how long does the task you assign to the backend take to complete? It seems like a concurrency issue.
Looks like the issue (as far as I can currently tell) is that I'm using PyCharm, which synchronizes the project's files whenever its window gains or loses focus. This rewrites the project files even when there are no changes, which causes the devserver to restart all instances, leading to the 500 errors.
More info on PyCharm Synchronization
Link to issue at PyCharm
I'm using Django 1.6 and Django-ImageKit 3.2.1.
I'm trying to generate images asynchronously with ImageKit. Async image generation works locally but not on the production server.
I'm using Celery and I've tried both:
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Async'
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Celery'
Using the Simple backend (synchronous) instead of Async or Celery works fine on the production server. So I don't understand why the asynchronous backends give me the following ImportError (pulled from the Celery log):
[2014-04-05 21:51:26,325: CRITICAL/MainProcess] Can't decode message body: DecodeError(ImportError('No module named s3utils',),) [type:u'application/x-python-serialize' encoding:u'binary' headers:{}]
body: '\x80\x02}q\x01(U\x07expiresq\x02NU\x03utcq\x03\x88U\x04argsq\x04cimagekit.cachefiles.backends\nCelery\nq\x05)\x81q\x06}bcimagekit.cachefiles\nImageCacheFile\nq\x07)\x81q\x08}q\t(U\x11cachefile_backendq\nh\x06U\x12ca$
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/messaging.py", line 585, in _receive_callback
decoded = None if on_m else message.decode()
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/message.py", line 142, in decode
self.content_encoding, accept=self.accept)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
self.gen.throw(type, value, traceback)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 59, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 55, in _reraise_errors
yield
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 64, in pickle_loads
return load(BytesIO(s))
DecodeError: No module named s3utils
s3utils is what defines my AWS S3 bucket paths. I'll post it if need be, but the strange thing, I think, is that the synchronous backend has no problem importing s3utils while the asynchronous one does, and it fails ONLY on the production server, not locally.
I'd be SO grateful for any help debugging this. I've been wrestling with this for days. I'm still learning Django and Python, so I'm hoping this is a silly mistake on my part. My Google-fu has failed me.
As I hinted at in my comment above, this kind of thing is usually caused by forgetting to restart the worker.
It's a common gotcha with Celery. The workers are a separate process from your web server, so they have their own copy of your code loaded. Just like with your web server, if you change your code you need to reload it so the worker sees the change. The web server talks to your worker not by directly running code, but by passing serialized messages via the broker, saying something like "call the function do_something()". The worker then reads that message and (here's the tricky part) calls its own version of do_something(). So even if you restart your web server (so that it has the new version of your code), if you forget to reload the worker (which is what actually calls the function), the old version of the function will still be called. In other words, you need to restart the worker any time you make a change to your tasks.
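To make the moving parts concrete, here is a minimal sketch (the project name, broker URL, and task body are assumptions for illustration):
# tasks.py
from celery import Celery

# The worker imports this module once at startup and keeps it in memory;
# editing this file changes nothing until the worker is restarted.
app = Celery('proj', broker='redis://localhost:6379/0')

@app.task
def do_something():
    # The web server only sends the message "call do_something()"; which
    # version of this body runs depends on the code the worker has loaded.
    return 'done'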
You might want to check out the autoreload option for development. It could save you some headaches.
The following code:
import paramiko

t = paramiko.Transport((hostname, port))
t.connect(username=username, password=password)
sftp = t.open_sftp_client()
Raises this exception:
Traceback (most recent call last):
File "C:\Users\elqstux\workspace\WxPython\FetcchFile.py", line 41, in <module>
sftp = t.open_sftp_client()
File "C:\Python27\lib\site-packages\paramiko\transport.py", line 845, in open_sftp_client
return SFTPClient.from_transport(self)
File "C:\Python27\lib\site-packages\paramiko\sftp_client.py", line 106, in from_transport
return cls(chan)
File "C:\Python27\lib\site-packages\paramiko\sftp_client.py", line 87, in __init__
server_version = self._send_version()
File "C:\Python27\lib\site-packages\paramiko\sftp.py", line 108, in _send_version
t, data = self._read_packet()
File "C:\Python27\lib\site-packages\paramiko\sftp.py", line 179, in _read_packet
raise SFTPError('Garbage packet received')
SFTPError: Garbage packet received
My host's IP is 147.214.16.150; I use this command to test from the console:
esekilvxen245 [11:03am] [/home/elqstux] -> sftp 147.214.16.150
Connecting to 147.214.16.150...
These computer resources, specifically Internet access and E-mail, are
provided for authorized users only. For legal, security and cost
reasons, utilization and access of resources are monitored and recorded
in log files. All information (whether business or personal) that is
created, received, downloaded, stored, sent or otherwise processed can
be accessed, reviewed, copied, recorded or deleted by Ericsson, in
accordance with approved internal procedures, at any time if deemed
necessary or appropriate, and without advance notice. Any evidence of
unauthorized access or misuse of Ericsson resources may result in
disciplinary actions, including termination of employment or assignment,
and could subject a user to criminal prosecution. Your use of Ericsson's
computer resources constitutes your consent to Ericsson's Policies and
Directives, including the provisions stated above.
IF YOU ARE NOT AN AUTHORIZED USER, PLEASE EXIT IMMEDIATELY
Enter Windows Password:
Received message too long 1131770482
I had a similar problem and found that it was due to output from a program, 'gvm', in my login scripts. SFTP expects clean binary protocol packets on the connection, so any text a login script prints to stdout is misread as packet data, which is what produces errors like 'Garbage packet received' and 'Received message too long'.
I fixed it by commenting out the offending line in my .bashrc file:
#THIS MUST BE AT THE END OF THE FILE FOR GVM TO WORK!!!
#[[ -s "/home/micron/.gvm/bin/gvm-init.sh" ]] && source "/home/micron/.gvm/bin/gvm-init.sh"