I'm hitting an error in code that connects to AWS using boto3. It started yesterday afternoon, and I can't find anything that changed between the last successful run and the first failing one.
The error is:
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL:
In .aws/config I have:
$ cat ~/.aws/config
[default]
region=us-east-1
Here's what I know:
Using the same AWS credentials and config on another machine, I don't see the error.
Using different AWS credentials and config on the same machine, I do see the error.
I'm the only one in our group who has this issue, with any credentials, on any machine.
I don't think I changed anything that would affect this between the last time it worked and the first time it didn't. It seems like I'd have had to change some AWS-specific configuration or some low-level libraries, and I made no such change. I was talking with a colleague for 30-45 minutes, and when I returned and picked up where I left off, the issue first appeared.
Any thoughts or ideas on troubleshooting this?
Full exception dump follows.
$ python
Python 2.7.10 (default, Jul 14 2015, 19:46:27)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
>>> boto3.client('ec2').describe_regions()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/botocore/client.py", line 200, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Library/Python/2.7/site-packages/botocore/client.py", line 244, in _make_api_call
operation_model, request_dict)
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 173, in make_request
return self._send_request(request_dict, operation_model)
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 203, in _send_request
success_response, exception):
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 267, in _needs_retry
caught_exception=caught_exception)
File "/Library/Python/2.7/site-packages/botocore/hooks.py", line 226, in emit
return self._emit(event_name, kwargs)
File "/Library/Python/2.7/site-packages/botocore/hooks.py", line 209, in _emit
response = handler(**kwargs)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 250, in __call__
caught_exception)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 273, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 313, in __call__
caught_exception)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 222, in __call__
return self._check_caught_exception(attempt_number, caught_exception)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 355, in _check_caught_exception
raise caught_exception
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-1.amazonaws.com/"
Issue resolved. It turns out that a couple of seemingly unrelated actions, independent of anything boto-related, left the HTTP_PROXY and HTTPS_PROXY environment variables improperly set, which in turn broke the botocore calls under both boto3 and the AWS CLI. Removing both environment variables resolved the problem.
I'll leave this up as I found it very difficult to find anything pointing to this as a possible cause of this error. Might save someone else some of the hair pulling I went through.
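For anyone who finds this later, here is a minimal sketch (plain Python, nothing boto-specific assumed) to check whether proxy variables are set in the environment your code actually runs in:
import os

# Print any proxy-related environment variables the process can see; a stale or
# unreachable proxy here will break botocore with EndpointConnectionError.
for name in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy", "NO_PROXY", "no_proxy"):
    value = os.environ.get(name)
    if value:
        print("%s=%s" % (name, value))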
I just had a similar issue. All of a sudden I could no longer connect to my S3 bucket through boto3 in Django, while the same actions still worked from my Heroku environment.
It appeared that I had recently installed the AWS CLI, where my configuration was different, and the CLI configuration overruled my environment variables... Damn, it took me 3 hours to find.
Through aws configure I now set:
AWS Access Key ID [****************MPIA]: your real access key here, without quotes
AWS Secret Access Key [****************7DWm]: your real secret access key here, without quotes
Default region name [eu-west-1]: your real region here, without quotes
Default output format [None]: [here I just pressed Enter so as not to change this]
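If it helps, here is a small sketch (plain boto3, no project-specific assumptions) to confirm which region and credentials boto3 actually resolves after running aws configure:
import boto3

# Show what boto3 resolves from the environment, ~/.aws/credentials and ~/.aws/config.
session = boto3.session.Session()
credentials = session.get_credentials()
print("region: %s" % session.region_name)
if credentials is not None:
    print("access key ends with: %s" % credentials.access_key[-4:])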
Just posting this for the sake of anyone having this issue.
I came across the same error when my connection went down - botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-2.amazonaws.com/"
When the connection was restored, it worked again without any issue. Probable reasons for this error:
- Connection error
- The region is not available to serve your request, since every request hits a region-specific endpoint on AWS (more detail can be found at https://docs.aws.amazon.com/general/latest/gr/rande.html#billing-pricing)
It seems like boto3 has matured enough to throw an exception that points at the actual reason for the failure, so you can tell precisely what is going on.
Also, if you have any issue related to your config or the request itself, most of those failures are encapsulated in the ClientError exception.
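A short sketch of how the two failure modes show up in code (standard boto3/botocore exceptions, nothing project-specific assumed):
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

ec2 = boto3.client('ec2', region_name='us-east-1')
try:
    ec2.describe_regions()
except EndpointConnectionError as e:
    # Network-level failure: the regional endpoint could not be reached
    # (connection down, unreachable proxy, DNS problems, ...).
    print("Could not reach the endpoint: %s" % e)
except ClientError as e:
    # The endpoint was reached but AWS rejected the request
    # (bad credentials, missing permissions, invalid parameters, ...).
    print("AWS returned an error: %s" % e.response['Error']['Code'])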
Related
I'm trying to make the MQTT-to-ROS bridge work, and I keep getting this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2627, in _thread_main
self.loop_forever(retry_first_connection=True)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1407, in loop_forever
rc = self.loop(timeout, max_packets)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 923, in loop
rc = self.loop_read(max_packets)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1206, in loop_read
rc = self._packet_read()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 1799, in _packet_read
rc = self._packet_handle()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2275, in _packet_handle
return self._handle_publish()
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2461, in _handle_publish
self._handle_on_message(message)
File "/home/animu/.local/lib/python2.7/site-packages/paho/mqtt/client.py", line 2615, in _handle_on_message
t[1](self, self._userdata, message)
File "/home/animu/catkin_ws/src/mqtt_bridge-master/src/mqtt_bridge/bridge.py", line 114, in _callback_mqtt
ros_msg = self._create_ros_message(mqtt_msg)
File "/home/animu/catkin_ws/src/mqtt_bridge-master/src/mqtt_bridge/bridge.py", line 124, in _create_ros_message
msg_dict = self._deserialize(mqtt_msg.payload)
File "msgpack/_unpacker.pyx", line 143, in msgpack._unpacker.unpackb (msgpack/_unpacker.cpp:2143)
ExtraData: unpack(b) received extra data.
I can't find anything about it on the internet, as this bridge is, I guess, not commonly used. The only similar problems were in Salt and Kafka, but no solution is to be found there either. All Python libraries are up to date; I double-checked. The bridge sends messages from ROS to MQTT without any problems, both String and Bool types. Any message sent from MQTT ends up in this error, with nothing received on the ROS side.
It's a bit late, but I'll give some advice for future readers.
First, make sure you have installed all the requirements the bridge needs to function. Check them by reading requirements.txt.
Second, edit the mqtt_bridge configuration file so that it matches the topics from ROS and from your MQTT server, as well as the IP address/port of the MQTT server.
That's it.
Since Animu is a student, I assume this was an assignment from his training or internship company. An answer is probably of no use anymore, but since I also had this problem, I hereby offer a solution for future readers:
In the bridge repository there is a file called "demo_params.yaml" (or, if you have already renamed it, whichever .yaml file contains your settings).
This file includes the following:
mqtt:
  client:
    protocol: 4  # MQTTv311
  connection:
    host: localhost
    port: 1883
    keepalive: 60
  private_path: device/001
serializer: msgpack:dumps
deserializer: msgpack:loads
bridge:
  # ping pong
  - factory: mqtt_bridge.bridge:RosToMqttBridge
    msg_type: std_msgs.msg:Bool
    topic_from: /ping
    topic_to: ping
  - factory: mqtt_bridge.bridge:MqttToRosBridge
    msg_type: std_msgs.msg:Bool
    topic_from: ping
    topic_to: /pong
As you can see, it says msgpack is used to serialize and deserialize the messages that are sent back and forth. This mainly works in the ROS-to-MQTT direction. The other way around it does not work, because a plain-text MQTT message is not valid msgpack, so the deserializer in the Python code fails. You have two solutions for this.
Continue to work with msgpack and make sure that the MQTT messages you publish are already encoded the way msgpack itself would serialize them (binary). This is a tricky solution if you publish MQTT messages manually, since you probably want to keep them human-readable. If another program of yours publishes the MQTT messages, feel free to serialize the message with msgpack first and then publish it; then the bridge works as well.
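For example, here is a minimal publisher sketch using paho-mqtt and msgpack (assuming a broker on localhost:1883 and the "ping" topic from the config above):
import msgpack
import paho.mqtt.publish as publish

# Encode the payload the way the bridge's msgpack deserializer expects it,
# then publish it to the topic the MqttToRosBridge subscribes to.
payload = msgpack.packb({"data": True})  # same structure as a std_msgs/Bool message
publish.single("ping", payload=payload, hostname="localhost", port=1883)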
The other option is to use JSON serialization instead of msgpack serialization. This is the bridge's default, but you can also specify it explicitly in your .yaml file. You do this by changing this:
serializer: msgpack:dumps
deserializer: msgpack:loads
to this:
serializer: json:dumps
deserializer: json:loads
Now you can publish MQTT messages both manually and from software.
You do this as follows:
mosquitto_pub -t 'echo' -m '{"data": "test"}'
I'm on version 1.9.9 of the SDK and I'm having issues with the devserver. I have a manually scaled module with 1 instance. I created a webapp2.RequestHandler for /_ah/start, and in that handler I start a background thread. When I run my app in the devserver, the /_ah/start handler returns a 200, but /_ah/background will randomly return 500 errors for a while. After some time (usually a minute or two, but sometimes more), the 500 errors stop, but they randomly occur again every few hours. It also seems that every time I open a new browser tab (Chrome), I get the same error. Anyone know what could be causing this?
Here is the RequestHandler for /_ah/start:
import webapp2
from google.appengine.api import background_thread, runtime

class StartupHandler(webapp2.RequestHandler):
    def get(self):
        # Register the shutdown hook, then start the long-running work on a
        # background thread so this start handler can return right away.
        runtime.set_shutdown_hook(shutdown_hook)
        global foo
        if foo is None:
            foo = Foo()
            background_thread.start_new_background_thread(do_foo, [])
        self.response.set_status(200)
Here is the 500 error:
ERROR 2014-08-18 07:39:36,256 module.py:717] Request to '/_ah/background' failed
Traceback (most recent call last):
File "\appengine\tools\devappserver2\module.py", line 694, in _handle_request
environ, wrapped_start_response)
File "\appengine\tools\devappserver2\request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "\appengine\tools\devappserver2\module.py", line 1672, in _handle_script_request
request_type)
File "\appengine\tools\devappserver2\module.py", line 1624, in _handle_instance_request
request_id, request_type)
File "\appengine\tools\devappserver2\instance.py", line 382, in handle
request_type))
File "\appengine\tools\devappserver2\http_proxy.py", line 190, in handle
response = connection.getresponse()
File "E:\Programing\Python27\lib\httplib.py", line 1030, in getresponse
response.begin()
File "E:\Programing\Python27\lib\httplib.py", line 407, in begin
version, status, reason = self._read_status()
File "E:\Programing\Python27\lib\httplib.py", line 365, in _read_status
line = self.fp.readline()
File "E:\Programing\Python27\lib\socket.py", line 430, in readline
data = recv(1)
error: [Errno 10054] An existing connection was forcibly closed by the remote host
INFO 2014-08-18 07:39:36,257 module.py:1890] Waiting for instances to restart
INFO 2014-08-18 07:39:36,262 module.py:642] lease: "GET /_ah/background HTTP/1.1" 500 -
Well, this might not be the answer, but how long does a specific task assigned to the backend take to complete? It seems like an issue with concurrency.
Looks like the issue (as far as I can currently tell) is that I'm using PyCharm, which synchronizes the project's files when its window is entered or exited. This rewrites the project files even if there are no changes, which causes the devserver to restart all instances, leading to the 500 errors.
More info on PyCharm Synchronization
Link to issue at PyCharm
I'm using Django 1.6 and Django-ImageKit 3.2.1.
I'm trying to generate images asynchronously with ImageKit. Async image generation works locally but not on the production server.
I'm using Celery and I've tried both:
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Async'
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Celery'
Using the Simple backend (synchronous) instead of Async or Celery works fine on the production server. So I don't understand why the asynchronous backend gives me the following ImportError (pulled from the Celery log):
[2014-04-05 21:51:26,325: CRITICAL/MainProcess] Can't decode message body: DecodeError(ImportError('No module named s3utils',),) [type:u'application/x-python-serialize' encoding:u'binary' headers:{}]
body: '\x80\x02}q\x01(U\x07expiresq\x02NU\x03utcq\x03\x88U\x04argsq\x04cimagekit.cachefiles.backends\nCelery\nq\x05)\x81q\x06}bcimagekit.cachefiles\nImageCacheFile\nq\x07)\x81q\x08}q\t(U\x11cachefile_backendq\nh\x06U\x12ca$
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/messaging.py", line 585, in _receive_callback
decoded = None if on_m else message.decode()
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/message.py", line 142, in decode
self.content_encoding, accept=self.accept)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
self.gen.throw(type, value, traceback)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 59, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 55, in _reraise_errors
yield
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 64, in pickle_loads
return load(BytesIO(s))
DecodeError: No module named s3utils
s3utils is what defines my AWS S3 bucket paths. I'll post it if need be, but the strange thing, I think, is that the synchronous backend has no problem importing s3utils while the asynchronous one does... and the asynchronous one fails ONLY on the production server, not locally.
I'd be SO grateful for any help debugging this. I've been wrestling with this for days. I'm still learning Django and Python, so I'm hoping this is a stupid mistake on my part. My Google-fu has failed me.
As I hinted at in my comment above, this kind of thing is usually caused by forgetting to restart the worker.
It's a common gotcha with Celery. The workers are a separate process from your web server so they have their own versions of your code loaded. And just like with your web server, if you make a change to your code, you need to reload so it sees the change. The web server talks to your worker not by directly running code, but by passing serialized messages via the broker, which will say something like "call the function do_something()". Then the worker will read that message and—and here's the tricky part—call its version of do_something(). So even if you restart your webserver (so that it has a new version of your code), if you forget to reload the worker (which is what actually calls the function), the old version of the function will be called. In other words, you need to restart the worker any time you make a change to your tasks.
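To make that concrete, here is a hypothetical task; the project name, broker URL, and function body are made up purely for illustration:
from celery import Celery

# Hypothetical app and broker URL, just for illustration.
app = Celery('proj', broker='redis://localhost:6379/0')

@app.task
def do_something(path):
    # The broker message only carries the task name and its arguments. A worker
    # started before you edited this body keeps running the old body until the
    # worker process is restarted.
    return path.upper()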
You might want to check out the autoreload option for development. It could save you some headaches.
After updating from 1.7.5 (where everything worked fine) I'm getting an HTTP Error 403: Forbidden when trying to open any sites via localhost. The strange thing is that I have pretty much the same setup at home as here at work, and everything works there... Might it be an issue with the proxy server we're using at work, since that's the only difference I can think of? Here's the error log I'm getting, so if anyone knows what's going on please help (;
Traceback (most recent call last):
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 1302, in communicate
req.respond()
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 831, in respond
self.server.gateway(self).respond()
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 2115, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "U:\Dev\GAE\google\appengine\tools\devappserver2\wsgi_server.py", line 246, in __call__
return app(environ, start_response)
File "U:\Dev\GAE\google\appengine\tools\devappserver2\request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "U:\Dev\GAE\google\appengine\tools\devappserver2\python\request_handler.py", line 89, in __call__
self._flush_logs(response.get('logs', []))
File "U:\Dev\GAE\google\appengine\tools\devappserver2\python\request_handler.py", line 220, in _flush_logs
apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response)
File "U:\Dev\GAE\google\appengine\api\apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "U:\Dev\GAE\google\appengine\api\apiproxy_stub_map.py", line 320, in MakeSyncCall
rpc.CheckSuccess()
File "U:\Dev\GAE\google\appengine\api\apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "U:\Dev\GAE\google\appengine\ext\remote_api\remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "U:\Dev\GAE\google\appengine\ext\remote_api\remote_api_stub.py", line 226, in _MakeRealSyncCall
encoded_response = self._server.Send(self._path, encoded_request)
File "U:\Dev\GAE\google\appengine\tools\appengine_rpc.py", line 393, in Send
f = self.opener.open(req)
File "U:\Dev\Python\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "U:\Dev\Python\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "U:\Dev\Python\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "U:\Dev\Python\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "U:\Dev\Python\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
INFO 2013-04-19 12:28:52,576 server.py:561] default: "GET / HTTP/1.1" 500 -
INFO 2013-04-19 12:28:52,619 server.py:561] default: "GET /favicon.ico HTTP/1.1" 304 -
Also, the launcher throws an error when closing:
Traceback (most recent call last):
File "launcher\mainframe.pyc", line 327, in OnStop
File "launcher\taskcontroller.pyc", line 167, in Stop
File "launcher\dev_appserver_task_thread.pyc", line 82, in stop
File "launcher\taskthread.pyc", line 107, in stop
File "launcher\platform.pyc", line 397, in KillProcess
pywintypes.error: (5, 'TerminateProcess', 'Access is denied.')
I had this very same issue on my Mac OS X machine when using a proxy server with Google App Engine Launcher 1.8.6. Apparently there's an issue with "proxy_bypass" in "urllib2.py".
There are two possible solutions:
Downgrade to 1.7.5 (but who wants to downgrade?).
Edit "[GAE Installation path]/google/appengine/tools/appengine_rpc.py" and look for the line that says
opener.add_handler(fancy_urllib.FancyProxyHandler())
On my machine it was line 578. Put a hash (#) at the beginning of that line, like this:
`#opener.add_handler(fancy_urllib.FancyProxyHandler())`
Save the file, stop and then restart your application. Now dev_appserver.py shouldn't try to use any proxy server at all.
If your application uses any external resources, like a SOAP web service or something similar, and you can't reach that server without the proxy, then you'll have to downgrade. Please keep in mind that external JavaScript files (like the Facebook SDK or similar) are loaded by your browser, not by your application.
Since I'm not using any external REST or SOAP services it worked for me!
Hopefully it will work for you as well.
Try either:
- Accessing it through a different proxy, i.e. a proxy within a proxy
- Accessing it through your local IP, e.g. 192.168.1.1
I faced the same issue with version 1.9.5. It seems that the API proxy is sending some RPCs to the proxy server, which are then being rejected with HTTP 403 (since proxy servers are generally configured to reject connection attempts to arbitrary ports). In my case I was using the urlfetch module in my app to access external web pages, so disabling the proxy server was not an option for me.
This is how I worked around the issue some time back (most probably it was based on comments found under this issue, but I cannot remember the exact sources).
NOTE:
For this approach to work, you'll have to know the hostname/IP address and default port of your proxy server, and change them appropriately in the code if you happen to connect to a different proxy server.
When you are not behind the proxy server, you will have to revert the applied changes in order to return to a working state (if you want internet access inside your app).
Here it goes:
Disable proxy settings for the Python (Google App Engine Launcher) environment in some way. (In my case it was easy since I was launching the dev_appserver.py from a Terminal shell (on Linux), and the unset http_proxy and unset https_proxy commands did the trick.)
Edit {App Engine SDK root}/google/appengine/api/urlfetch_stub.py. Find the code block
if _CONNECTION_SUPPORTS_TIMEOUT:
  connection = connection_class(host, timeout=deadline)
else:
  connection = connection_class(host)
(lines 376-379 in my case) and replace it with:
if _CONNECTION_SUPPORTS_TIMEOUT:
  if host[:9] == 'localhost' or host[:9] == '127.0.0.1':
    connection = connection_class(host, timeout=deadline)
  else:
    connection = connection_class('your_proxy_host_goes_here', your_proxy_port_number_goes_here, timeout=deadline)
else:
  if host[:9] == 'localhost' or host[:9] == '127.0.0.1':
    connection = connection_class(host)
  else:
    connection = connection_class('your_proxy_host_goes_here', your_proxy_port_number_goes_here)
replacing the placeholders your_proxy_host_goes_here and your_proxy_port_number_goes_here with appropriate values.
(I believe this code can be written more elegantly, though... any Python geeks out there? :) )
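For what it's worth, here is one possible tidier variant of the same patch. It's an untested sketch meant to sit in the same spot inside urlfetch_stub.py (so host, deadline, connection_class and _CONNECTION_SUPPORTS_TIMEOUT come from the surrounding code), and the proxy placeholders are still yours to fill in:
# Same logic as the block above, written once: connect directly for local
# hosts, otherwise route the connection through the proxy.
PROXY_HOST = 'your_proxy_host_goes_here'
PROXY_PORT = your_proxy_port_number_goes_here

use_direct = host[:9] in ('localhost', '127.0.0.1')
args = (host,) if use_direct else (PROXY_HOST, PROXY_PORT)
kwargs = {'timeout': deadline} if _CONNECTION_SUPPORTS_TIMEOUT else {}
connection = connection_class(*args, **kwargs)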
In my case, I also had to delete the existing compiled file urlfetch_stub.pyc (located in the same directory as urlfetch_stub.py) because the SDK didn't seem to pick up the changes until I did so.
Now you can use dev_appserver to launch your app, and use urlfetch-backed services within the app, free from HTTP 403 errors.
I'm building an application in Python 2.6 that needs to get data from CouchDB. I'm using CouchDB-0.8-py2.6 to connect to the database.
I'm using this code:
import couchdb
server = couchdb.Server(url='http://localhost:5984/', full_commit=True, session=None)
db = server['databaseName']
doc = db['docId']
value = doc['value']
print(value)
On my local machine (OS X) the code runs perfectly, but when I try to run it on a Debian server, I get the following error:
File "/usr/local/lib/python2.6/dist-packages/CouchDB-0.7dev_r199-py2.6.egg/couchdb/client.py", line 165, in __getitem__
db.resource.head() # actually make a request to the database
File "/usr/local/lib/python2.6/dist-packages/CouchDB-0.7dev_r199-py2.6.egg/couchdb/client.py", line 977, in head
return self._request('HEAD', path, headers=headers, **params)
File "/usr/local/lib/python2.6/dist-packages/CouchDB-0.7dev_r199-py2.6.egg/couchdb/client.py", line 1010, in _request
resp, data = _make_request()
File "/usr/local/lib/python2.6/dist-packages/CouchDB-0.7dev_r199-py2.6.egg/couchdb/client.py", line 1005, in _make_request
body=body, headers=headers)
File "/usr/local/lib/python2.6/dist-packages/httplib2-0.6.0-py2.6.egg/httplib2/__init__.py", line 1025, in request
cached_value = self.cache.get(cachekey)
AttributeError: 'bool' object has no attribute 'get'
I've Googled this numerous times and no one seems to have the same error. Does anyone have an idea what I'm doing wrong here?
You're using a different version of the CouchDB Python library on the server: CouchDB-0.7dev_r199. Newer versions of couchdb-python no longer use httplib2, so if you make your development and server environments roughly the same, the problem is quite likely to disappear.
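A quick sketch to confirm which couchdb package each machine is actually importing (the printed path and version will of course differ between your systems):
import couchdb

# Compare these two values between the OS X machine and the Debian server.
print(couchdb.__file__)
print(getattr(couchdb, '__version__', 'unknown'))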