I have successfully created a webhook in StackStorm and it is visible in the webhook list.
[centos@ip- ~]$ sudo st2 webhook list
+------------+------------------+-------------+
| url | type | description |
+------------+------------------+-------------+
| wfcreation | core.st2.webhook | |
+------------+------------------+-------------+
[centos@ip- ~]$
I triggered the webhook with a payload and the proper headers, using a StackStorm API key. The webhook gets triggered and returns a status code of 200, but the underlying StackStorm workflow fails with the error below.
{
"traceback": " File \"/opt/stackstorm/st2/lib/python2.7/site-packages/st2actions/container/base.py\", line 119, in _do_run
(status, result, context) = runner.run(action_params)
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/retrying.py\", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/retrying.py\", line 206, in call
return attempt.get(self._wrap_exception)
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/retrying.py\", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/retrying.py\", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File \"/opt/stackstorm/runners/mistral_v2/mistral_v2/mistral_v2.py\", line 247, in run
result = self.start_workflow(action_parameters=action_parameters)
File \"/opt/stackstorm/runners/mistral_v2/mistral_v2/mistral_v2.py\", line 284, in start_workflow
**options)
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/mistralclient/api/v2/executions.py\", line 65, in create
return self._create('/executions', data)
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/mistralclient/api/base.py\", line 100, in _create
self._raise_api_exception(resp)
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/mistralclient/api/base.py\", line 160, in _raise_api_exception
error_message=error_data)
",
"error": "AccessRefused: 403"
}
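For reference, the call that triggered the webhook and returned the 200 was shaped roughly like this (the host, API key, and payload are placeholders; the St2-Api-Key header is the standard way to pass an st2 API key):

```python
import json

def build_webhook_request(host, webhook, api_key, payload):
    """Assemble the URL, headers, and body for triggering an st2 webhook."""
    url = f"{host}/api/v1/webhooks/{webhook}"
    headers = {"St2-Api-Key": api_key, "Content-Type": "application/json"}
    return url, headers, json.dumps(payload)

url, headers, body = build_webhook_request(
    "https://st2.example.com", "wfcreation", "MY_API_KEY", {"run": True})
# requests.post(url, headers=headers, data=body, verify=False) then fires it.
```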
The official StackStorm documentation does not cover troubleshooting this error.
Any help would be greatly appreciated as I am blocked on this right now.
Finally I was able to figure out that the problem was with the mistral-server service running on the StackStorm host.
The mistral-server service was unable to connect to the RabbitMQ service due to a misconfiguration during StackStorm installation. But the misleading error message "AccessRefused: 403" did not point towards a RabbitMQ connection issue.
So here is what I found in the logs.
The error message in the mistral logs (/var/log/mistral/):
2018-06-25 15:30:19.309 10767 ERROR oslo_service.service AccessRefused: (0,0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
Digging into the RabbitMQ logs (/var/log/rabbitmq/):
=ERROR REPORT==== 25-Jun-2018::16:34:23 ===
closing AMQP connection <0.5118.0> (127.0.0.1:41248 -> 127.0.0.1:5672):
{handshake_error,starting,0,
{amqp_error,access_refused,
"AMQPLAIN login refused: user 'st2' - invalid credentials",
'connection.start_ok'}}
Clearly the st2 user credentials had been configured incorrectly during installation, and that led to this whole issue.
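To double-check which credentials Mistral is actually handing to RabbitMQ, you can pull them out of the transport_url line in /etc/mistral/mistral.conf (path and key name assumed from a default install) and compare them with what RabbitMQ expects. A minimal sketch:

```python
import re

def parse_transport_url(conf_text):
    """Extract user/password/host/port from an oslo.messaging line like:
       transport_url = rabbit://st2:SECRET@127.0.0.1:5672
    """
    m = re.search(
        r"transport_url\s*=\s*rabbit://([^:]+):([^@]+)@([^:/]+):(\d+)",
        conf_text)
    if not m:
        return None
    user, password, host, port = m.groups()
    return {"user": user, "password": password, "host": host, "port": int(port)}

# Example with a placeholder password:
creds = parse_transport_url("transport_url = rabbit://st2:SECRET@127.0.0.1:5672")
```

If the password found here fails `sudo rabbitmqctl authenticate_user st2 <password>`, resetting it with `rabbitmqctl change_password` and restarting the mistral services clears the 403.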
Hope this helps somebody in the future.
Another thing that could cause this is not having NGINX configured; in that case the system expects you to use the local ports. Here's an example that might help you: https://github.com/StackStorm/st2/tree/master/conf/st2.conf.sample and here's the nginx config.
I started working with the Prefect orchestration tool.
My goal is to set up a server managing my automation on several other PCs and servers.
I do not fully understand the architecture of Prefect yet (with all these agents etc.), but I managed to start a server in a remote Ubuntu environment.
To access the UI remotely, I created a config.toml and added the following lines:
[server]
endpoint = "<IPofserver>:4200/graphql"
[server.ui]
apollo_url = "http://<IPofserver>:4200/graphql"
[telemetry]
[server.telemetry]
enabled = false
The telemetry part is just to disable sending analysis data to Prefect.
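Note that, unlike apollo_url, the endpoint value above has no http:// scheme, which matches the InvalidSchema complaint about ':4200/graphql' in the traceback further down. A sketch of the corrected config.toml (an assumption based on that error, not something stated in the original post):

```toml
[server]
endpoint = "http://<IPofserver>:4200/graphql"

[server.ui]
apollo_url = "http://<IPofserver>:4200/graphql"

[server.telemetry]
enabled = false
```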
Afterwards it was possible to access the UI from another PC and also to start an agent on another PC with:
prefect agent local start --api "http://<IPofserver>:4200/graphql"
But how can I deploy flows now? I do not find an option to set their API endpoint like for the agent.
Even if I try to register a flow on the machine where the server itself is running, I get the following error message:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/prefect/core/flow.py", line 1726, in register
    registered_flow = client.register(
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 831, in register
    project = self.graphql(query_project).data.project  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 443, in graphql
    result = self.post(
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 398, in post
    response = self._request(
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 633, in _request
    response = self._send_request(
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 497, in _send_request
    response = session.post(
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 635, in post
    return self.request("POST", url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 695, in send
    adapter = self.get_adapter(url=request.url)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 792, in get_adapter
    raise InvalidSchema(f"No connection adapters were found for {url!r}")
requests.exceptions.InvalidSchema: No connection adapters were found for ':4200/graphql'
Example code used:
import prefect
from prefect import task, Flow

@task
def say_hello():
    logger = prefect.context.get("logger")
    logger.info("Hello, Cloud!")

with Flow("hello-flow") as flow:
    say_hello()

# Register the flow under the "Test" project
flow.register(project_name="Test")
If you are getting started with Prefect, I'd recommend using Prefect 2.0 - check this documentation page on getting started and this one about the underlying architecture.
If you still need help with Prefect Server and Prefect 1.0, check this extensive troubleshooting guide and if that doesn't help, send us a message on Slack, and we'll try to help you there.
I'm trying to set up a gRPC client in Python to hit a particular server. The server is set up to require authentication via access token. Therefore, my implementation looks like this:
from grpc import (access_token_call_credentials, composite_channel_credentials,
                  secure_channel, ssl_channel_credentials)

def create_connection(target, access_token):
    credentials = composite_channel_credentials(
        ssl_channel_credentials(),
        access_token_call_credentials(access_token))
    target = target if target else DEFAULT_ENDPOINT
    return secure_channel(target=target, credentials=credentials)

conn = create_connection(svc="myservice", session=Session(client_id=id, client_secret=secret))
stub = FakeStub(conn)
stub.CreateObject(CreateObjectRequest())
The issue I'm having is that, when I attempt to use this connection I get the following error:
File "<stdin>", line 1, in <module>
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 216, in __call__
response, ignored_call = self._with_call(request,
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 257, in _with_call
return call.result(), call
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 343, in result
raise self
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 241, in continuation
response, call = self._thunk(new_method).with_call(
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 266, in with_call
return self._with_call(request,
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 257, in _with_call
return call.result(), call
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 343, in result
raise self
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 241, in continuation
response, call = self._thunk(new_method).with_call(
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 957, in with_call
return _end_unary_response_blocking(state, call, True, None)
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{
"created":"@1633399048.828000000",
"description":"Failed to pick subchannel",
"file":"src/core/ext/filters/client_channel/client_channel.cc",
"file_line":3159,
"referenced_errors":[
{
"created":"@1633399048.828000000",
"description":
"failed to connect to all addresses",
"file":"src/core/lib/transport/error_utils.cc",
"file_line":147,
"grpc_status":14
}
]
}"
I looked up the status code associated with this response and it seems that the server is unavailable. So, I tried waiting for the connection to be ready:
channel_ready_future(conn).result()
but this hangs. What am I doing wrong here?
UPDATE 1
I converted the code to use the async connection instead of the synchronous connection but the issue still persists. Also, I saw that this question had also been posted on SO but none of the solutions presented there fixed the problem I'm having.
UPDATE 2
I assumed that this issue was occurring because the client couldn't find the TLS certificate issued by the server so I added the following code:
import ssl

def _get_cert(target: str) -> bytes:
    split_around_port = target.split(":")
    # get_server_certificate expects the port as an int, not a string
    data = ssl.get_server_certificate((split_around_port[0], int(split_around_port[1])))
    return str.encode(data)
and then changed ssl_channel_credentials() to ssl_channel_credentials(_get_cert(target)). However, this also hasn't fixed the problem.
The issue here was actually fairly deep. First, I turned on tracing and set the gRPC log level to debug, and then found this line:
D1006 12:01:33.694000000 9032 src/core/lib/security/transport/security_handshaker.cc:182] Security handshake failed: {"created":"@1633489293.693000000","description":"Cannot check peer: missing selected ALPN property.","file":"src/core/lib/security/security_connector/ssl_utils.cc","file_line":160}
This led me to this GitHub issue, which stated that the problem was grpcio not inserting the h2 protocol into requests, which causes ALPN-enabled servers to return that specific error. Some further digging led me to this issue, and since the server I was connecting to also uses Envoy, it was just a matter of modifying the Envoy deployment file so that:
clusters:
- name: my-server
  connect_timeout: 10s
  type: strict_dns
  lb_policy: round_robin
  http2_protocol_options: {}
  hosts:
  - socket_address:
      address: python-server
      port_value: 1337
  tls_context:
    common_tls_context:
      tls_certificates:
      alpn_protocols: ["h2"]  # <====== Add this.
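For reference, the tracing mentioned above can be enabled with two standard gRPC environment variables before launching the client (client.py is a placeholder for your own script):

```shell
# Standard gRPC debugging knobs; unset them again once you're done.
export GRPC_TRACE=all
export GRPC_VERBOSITY=DEBUG
# then run your client, e.g.:  python client.py
```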
I am trying to use the MSI example provided at the link below:
https://learn.microsoft.com/en-us/python/azure/python-sdk-azure-authenticate?view=azure-python#mgmt-auth-msi
To do that, I created a Linux VM, installed the MSI extension on it, and ran the example in a Python application. When I run that Python application I get the following error:
[azureuser@vish-redhat ~]$ python msi-auth.py
No handlers could be found for logger "msrestazure.azure_active_directory"
Traceback (most recent call last):
File "msi-auth.py", line 10, in <module>
subscription = next(subscription_client.subscriptions.list())
File "/usr/lib/python2.7/site-packages/msrest/paging.py", line 121, in __next__
self.advance_page()
File "/usr/lib/python2.7/site-packages/msrest/paging.py", line 107, in advance_page
self._response = self._get_next(self.next_link)
File "/usr/lib/python2.7/site-packages/azure/mgmt/resource/subscriptions/v2016_06_01/operations/subscriptions_operations.py", line 207, in internal_paging
request, header_parameters, **operation_config)
File "/usr/lib/python2.7/site-packages/msrest/service_client.py", line 191, in send
session = self.creds.signed_session()
File "/usr/lib/python2.7/site-packages/msrestazure/azure_active_directory.py", line 685, in signed_session
self.set_token()
File "/usr/lib/python2.7/site-packages/msrestazure/azure_active_directory.py", line 681, in set_token
self.scheme, _, self.token = get_msi_token(self.resource, self.port, self.msi_conf)
File "/usr/lib/python2.7/site-packages/msrestazure/azure_active_directory.py", line 590, in get_msi_token
result = requests.post(request_uri, data=payload, headers={'Metadata': 'true'})
File "/usr/lib/python2.7/site-packages/requests/api.py", line 108, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 464, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 415, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', error(111, 'Connection refused'))
[azureuser@vish-redhat ~]$
Code:
from msrestazure.azure_active_directory import MSIAuthentication
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
# Create MSI Authentication
credentials = MSIAuthentication()
# Create a Subscription Client
subscription_client = SubscriptionClient(credentials)
subscription = next(subscription_client.subscriptions.list())
subscription_id = subscription.subscription_id
# Create a Resource Management client
resource_client = ResourceManagementClient(credentials, subscription_id)
# List resource groups as an example. The only limit is what role and policy are assigned to this MSI token.
for resource_group in resource_client.resource_groups.list():
    print(resource_group.name)
You need to install the Python SDK on your Linux VM. Please refer to this official document.
pip install azure
Also, you need to give the Owner role to your VM at the subscription level.
For more information, please refer to this link.
Now you can use this code to test on the VM. I tested it in my lab; it works for me.
Note: You need to modify resource_client = ResourceManagementClient(credentials, subscription_id) to resource_client = ResourceManagementClient(credentials, str(subscription_id)), as it requires a string type.
A connection error usually means the extension is not yet available. You can check whether the extension is available using the CLI with az login --msi
https://learn.microsoft.com/en-us/azure/active-directory/managed-service-identity/how-to-use-vm-sign-in
If it works, your VM was created correctly with MSI support. If it doesn't, your extension is probably not configured correctly.
Note that we changed the way to get a token with MSI from inside a VM. We now use IMDS:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service
Starting with the next release of the CLI (the first one of April 2018), the CLI will authenticate with IMDS directly and not use the VM extension anymore. This is already shipped in the underlying library msrestazure in its 0.4.25 version, which bypasses your VM extension completely in favor of IMDS and is the preferred scenario now. Could you try with this version of msrestazure? If it works with 0.4.25 but not with 0.4.24, it likely means your VM extension is not installed correctly, but you don't care, since that's a deprecated scenario :)
Note that in order to get a token, your VM doesn't need any special permissions or ownership of the subscription. However, for the token to be useful, you do need them :). But since your error is related to the "get a token" part and not to permissions, I would just kindly suggest that you might need this complementary info later if you run into permission issues:
https://learn.microsoft.com/en-us/azure/active-directory/managed-service-identity/howto-assign-access-cli
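As a quick sanity check that IMDS is reachable from inside the VM, you can hit the token endpoint directly. The helper below is an illustrative sketch, not part of msrestazure; it only builds the request, and actually opening it with urllib.request.urlopen works only from inside an Azure VM:

```python
import urllib.request
from urllib.parse import quote

def build_imds_request(resource="https://management.azure.com/",
                       api_version="2018-02-01"):
    """Build a request for the Azure IMDS token endpoint.

    169.254.169.254 is the fixed, non-routable IMDS address, and the
    'Metadata: true' header is mandatory or IMDS rejects the call.
    """
    url = ("http://169.254.169.254/metadata/identity/oauth2/token"
           f"?api-version={api_version}&resource={quote(resource, safe='')}")
    return urllib.request.Request(url, headers={"Metadata": "true"})

req = build_imds_request()
# On an Azure VM with MSI enabled:
#   token = json.load(urllib.request.urlopen(req))["access_token"]
```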
(full disclosure, I work at MS in the SDK/CLI team and wrote the MSI support)
I'm hitting an error with code that connects to AWS using boto3. The error just started yesterday afternoon, and between the last time I didn't get the error and the first time I got the error I don't see anything that changed.
The error is:
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL:
In .aws/config I have:
$ cat ~/.aws/config
[default]
region=us-east-1
Here's what I know:
Using the same AWS credentials and config on another machine, I don't see the error.
Using different AWS credentials and config on the same machine, I do see the error.
I'm the only one in our group that has this issue for any credentials on any machine.
I don't think I changed anything that would affect this between the last time it worked and the first time it didn't. It seems like I'd have had to change some AWS-specific configuration on my side or some low-level libraries, and I made no such change. I was talking with a colleague for 30-45 minutes, and when I returned and picked up where I left off, the issue first appeared.
Any thoughts or ideas on troubleshooting this?
Full exception dump follows.
$ python
Python 2.7.10 (default, Jul 14 2015, 19:46:27)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
>>> boto3.client('ec2').describe_regions()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/botocore/client.py", line 200, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Library/Python/2.7/site-packages/botocore/client.py", line 244, in _make_api_call
operation_model, request_dict)
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 173, in make_request
return self._send_request(request_dict, operation_model)
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 203, in _send_request
success_response, exception):
File "/Library/Python/2.7/site-packages/botocore/endpoint.py", line 267, in _needs_retry
caught_exception=caught_exception)
File "/Library/Python/2.7/site-packages/botocore/hooks.py", line 226, in emit
return self._emit(event_name, kwargs)
File "/Library/Python/2.7/site-packages/botocore/hooks.py", line 209, in _emit
response = handler(**kwargs)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 250, in __call__
caught_exception)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 273, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 313, in __call__
caught_exception)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 222, in __call__
return self._check_caught_exception(attempt_number, caught_exception)
File "/Library/Python/2.7/site-packages/botocore/retryhandler.py", line 355, in _check_caught_exception
raise caught_exception
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-1.amazonaws.com/"
Issue resolved. It turns out that a couple of seemingly unrelated actions, independent of anything boto-related, resulted in the HTTP_PROXY and HTTPS_PROXY environment variables being improperly set, which then broke the botocore calls under both boto3 and the AWS CLI. Removing both environment variables resolved the problem.
I'll leave this up as I found it very difficult to find anything pointing to this as a possible cause of this error. Might save someone else some of the hair pulling I went through.
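A quick way to check for (and clear) stray proxy settings in the current shell:

```shell
# Print any proxy-related variables currently set (prints nothing if clean):
env | grep -i _proxy || true

# Clear them for this shell session only:
unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy
```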
I just had a similar issue. All of a sudden, no connection was possible anymore to my S3 through boto3 on Django, while I could still perform the actions in my Heroku environment.
It turned out I had recently installed the Amazon CLI, where my configuration was different, and the CLI overrules the environment variables... Damn, it took me 3 hours to find.
Through aws configure I then set:
AWS Access Key ID [****************MPIA]: "your true key here without quotes"
AWS Secret Access Key [****************7DWm]: "your true secret access key here without quotes"
Default region name [eu-west-1]: "your true region here without quotes"
Default output format [None]: [here i did just an enter in order not to change this]
Just posting this for the sake of anyone having this issue.
I came across the same error when my connection went down - botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-2.amazonaws.com/"
When the connection was restored, it worked without any issue. Probable reasons for this error:
A connection error
The region is not available to serve your request, as every request hits a regional endpoint on AWS (more detail can be found at https://docs.aws.amazon.com/general/latest/gr/rande.html#billing-pricing )
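Each regional request goes to an endpoint of the form seen in the error messages above. A trivial sketch (this pattern holds for most commercial regions; partitions like China and GovCloud use different suffixes):

```python
def ec2_endpoint(region: str) -> str:
    """EC2 endpoint URL for a commercial-partition region."""
    return f"https://ec2.{region}.amazonaws.com/"

# The URL boto3 could not reach in the traceback above:
# ec2_endpoint("us-east-1") -> "https://ec2.us-east-1.amazonaws.com/"
```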
It seems that boto3 has matured enough to throw an exception for the actual reason of the failure, so you can know precisely what is going on.
Also, if you have any config-related issue, most such errors are encapsulated in the ClientError exception.
After updating from 1.7.5 (where everything worked fine) I'm getting HTTP Error 403: Forbidden when trying to open any site via localhost. The strange thing is that I have pretty much the same setup at home as here at work, and everything works there... It might be an issue with the proxy server we're using at work, since that's the only difference I can think of. Here's the error log I'm getting, so if anyone knows what's going on, please help (;
Traceback (most recent call last):
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 1302, in communicate
req.respond()
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 831, in respond
self.server.gateway(self).respond()
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 2115, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "U:\Dev\GAE\google\appengine\tools\devappserver2\wsgi_server.py", line 246, in __call__
return app(environ, start_response)
File "U:\Dev\GAE\google\appengine\tools\devappserver2\request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "U:\Dev\GAE\google\appengine\tools\devappserver2\python\request_handler.py", line 89, in __call__
self._flush_logs(response.get('logs', []))
File "U:\Dev\GAE\google\appengine\tools\devappserver2\python\request_handler.py", line 220, in _flush_logs
apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response)
File "U:\Dev\GAE\google\appengine\api\apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "U:\Dev\GAE\google\appengine\api\apiproxy_stub_map.py", line 320, in MakeSyncCall
rpc.CheckSuccess()
File "U:\Dev\GAE\google\appengine\api\apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "U:\Dev\GAE\google\appengine\ext\remote_api\remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "U:\Dev\GAE\google\appengine\ext\remote_api\remote_api_stub.py", line 226, in _MakeRealSyncCall
encoded_response = self._server.Send(self._path, encoded_request)
File "U:\Dev\GAE\google\appengine\tools\appengine_rpc.py", line 393, in Send
f = self.opener.open(req)
File "U:\Dev\Python\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "U:\Dev\Python\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "U:\Dev\Python\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "U:\Dev\Python\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "U:\Dev\Python\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
INFO 2013-04-19 12:28:52,576 server.py:561] default: "GET / HTTP/1.1" 500 -
INFO 2013-04-19 12:28:52,619 server.py:561] default: "GET /favicon.ico HTTP/1.1" 304 -
Also, the launcher throws an error when closing:
Traceback (most recent call last):
File "launcher\mainframe.pyc", line 327, in OnStop
File "launcher\taskcontroller.pyc", line 167, in Stop
File "launcher\dev_appserver_task_thread.pyc", line 82, in stop
File "launcher\taskthread.pyc", line 107, in stop
File "launcher\platform.pyc", line 397, in KillProcess
pywintypes.error: (5, 'TerminateProcess', 'Access is denied.')
I had this very same issue on my Mac OS X when using a proxy server with Google App Engine Launcher 1.8.6. Apparently there's an issue with proxy_bypass in urllib2.py.
There are two possible solutions:
Downgrade to 1.7.5, but, who wants to downgrade?
Edit "[GAE Installation path]/google/appengine/tools/appengine_rpc.py" and look for the line that says
opener.add_handler(fancy_urllib.FancyProxyHandler())
In my computer it was line 578. Put a hash (#) at the beginning of the line, like this:
#opener.add_handler(fancy_urllib.FancyProxyHandler())
Save the file, stop and then restart your application. Now dev_appserver.py shouldn't try to use any proxy server at all.
If your application uses any external resources, like a SOAP web service or something similar, and you can't reach that server without the proxy, then you'll have to downgrade. Keep in mind that external JavaScript files (like the Facebook SDK or similar) are loaded by your browser, not by your application.
Since I'm not using any external REST or SOAP services it worked for me!
Hopefully it will work for you as well.
Try either:
-Accessing it through a different proxy, i.e. a proxy within a proxy
-Accessing it through your local IP, e.g. 192.168.1.1
I faced the same issue with version 1.9.5. It seems that the API proxy is sending some RPCs to the proxy server, which are then rejected with HTTP 403 (since proxy servers are generally configured to reject connection attempts to arbitrary ports). In my case I was using the urlfetch module in my app to access external web pages, so disabling the proxy server was not an option for me.
This is how I worked around the issue some time back (most probably it was based on comments found under this issue, but I cannot remember the exact sources).
NOTE:
For this approach to work, you'll have to know the hostname/IP address and default port of your proxy server, and change them appropriately in the code if you happen to connect to a different proxy server.
When you are not behind the proxy server, you will have to revert the applied changes in order to return to a working state (if you want internet access inside your app).
Here it goes:
Disable proxy settings for the Python (Google App Engine Launcher) environment in some way. (In my case it was easy, since I was launching dev_appserver.py from a Terminal shell on Linux, and the unset http_proxy and unset https_proxy commands did the trick.)
Edit {App Engine SDK root}/google/appengine/api/urlfetch_stub.py. Find the code block
if _CONNECTION_SUPPORTS_TIMEOUT:
    connection = connection_class(host, timeout=deadline)
else:
    connection = connection_class(host)
(lines 376-379 in my case) and replace it with:
if _CONNECTION_SUPPORTS_TIMEOUT:
    if host[:9] == 'localhost' or host[:9] == '127.0.0.1':
        connection = connection_class(host, timeout=deadline)
    else:
        connection = connection_class('your_proxy_host_goes_here', your_proxy_port_number_goes_here, timeout=deadline)
else:
    if host[:9] == 'localhost' or host[:9] == '127.0.0.1':
        connection = connection_class(host)
    else:
        connection = connection_class('your_proxy_host_goes_here', your_proxy_port_number_goes_here)
replacing the placeholders your_proxy_host_goes_here and your_proxy_port_number_goes_here with appropriate values.
(I believe this code can be written more elegantly, though... any Python geeks out there? :) )
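Since the question of a more elegant version was raised: the same patch can be collapsed into a small helper. This is only a sketch; PROXY_HOST and PROXY_PORT stand in for the placeholders above, and the flag mirrors the SDK's module-level _CONNECTION_SUPPORTS_TIMEOUT:

```python
_CONNECTION_SUPPORTS_TIMEOUT = True        # mirrors the SDK's module-level flag
PROXY_HOST = 'your_proxy_host_goes_here'   # placeholder
PROXY_PORT = 8080                          # placeholder port number

def make_connection(connection_class, host, deadline=None):
    """Connect directly for local hosts, via the proxy for everything else."""
    local = host[:9] in ('localhost', '127.0.0.1')
    args = (host,) if local else (PROXY_HOST, PROXY_PORT)
    kwargs = {'timeout': deadline} if _CONNECTION_SUPPORTS_TIMEOUT else {}
    return connection_class(*args, **kwargs)
```

The whole four-branch block then becomes a single `connection = make_connection(connection_class, host, deadline)` call.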
In my case, I also had to delete the existing compiled file urlfetch_stub.pyc (located in the same directory as urlfetch_stub.py) because the SDK didn't seem to pick up the changes until I did so.
Now you can use dev_appserver to launch your app, and use urlfetch-backed services within the app, free from HTTP 403 errors.