Using GAE remote_api for debugging from localhost - Connecting too late?

Trying to use Google App Engine's remote_api so that we can do line-by-line debugging through the IDE.
The remote API works great at first: the application successfully retrieves information from the database. The error occurs when webapp responds to the client browser.
The Code:
It is very similar to the example given in App Engine's documentation:
from model import My_Entity
from google.appengine.ext.remote_api import remote_api_stub

# Test database calls
def get(w_self):
    remote_api_stub.ConfigureRemoteApi(None, '/_ah/remote_api', auth_func, 'myapp.appspot.com')
    t_entity = My_Entity.get_by_key_name('the_key')
    w_self.response.set_status(200)
    # The error occurs AFTER this code executes, when webapp actually responds to the browser
Error Traceback:
The error seems to be related to the blobstore.
Is the remote API initialized too late in the code, after webapp has already done something with the blobstore through the localhost server? In that case the remote API might be redirecting blobstore requests to the production server instead of the localhost debug server where webapp expects them.
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2795, in _HandleRequest
login_url)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3622, in CreateImplicitMatcher
get_blob_storage)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_blobstore.py", line 420, in CreateUploadDispatcher
return UploadDispatcher()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_blobstore.py", line 307, in __init__
get_blob_storage())
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_blobstore.py", line 79, in GetBlobStorage
return apiproxy_stub_map.apiproxy.GetStub('blobstore').storage
AttributeError: 'RemoteStub' object has no attribute 'storage'
Should the remote API be initialized somewhere else in the code?
Or does this problem have to do with something else?
Thanks so much!

To get this working, you can use the testbed to start the stubs that are missing:
from google.appengine.ext import testbed

ADDRESS = ....
remote_api_stub.ConfigureRemoteApi(None, '/_ah/remote_api', auth_func, ADDRESS)

# First, create an instance of the Testbed class.
myTestBed = testbed.Testbed()
# Then activate the testbed, which prepares the service stubs for use.
myTestBed.activate()
# Next, declare which service stubs you want to use.
myTestBed.init_blobstore_stub()
myTestBed.init_logservice_stub()
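Both snippets assume an auth_func is already defined. A minimal sketch of one, following the usual interactive remote_api pattern (Python 2, matching the SDK of that era):

import getpass

def auth_func():
    # ConfigureRemoteApi calls this to obtain credentials; it must
    # return an (email, password) tuple
    return (raw_input('Email: '), getpass.getpass('Password: '))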

Related

Connect from a cisco device to http server on debian

I'm trying to communicate with an HTTP server running on Debian Stretch from a brand new, out-of-the-box Cisco device. The so-called zero-touch configuration is no problem:
The switch gets an IP address via DHCP, plus a link telling it where to fetch its initial configuration.
The switch gets its basic configuration, such as user credentials etc.
The problem arises when I try to search through a database on the server from the switch. Some variables are stored in this database: depending on the serial number of the switch, it should receive a specific hostname, management address, etc.
These new switches have an integrated Python module, so I ran some tests. I fetched the serial number without any problems. But the moment I tried to write the serial number to a txt file on the server, I got this error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 2] No such file or directory:
'http://10.232.152.19:80/temp.txt'
Code so far:
from cli import cli

def get_serial():
    serial = cli("show version | include System Serial\n")
    serial = serial.split()[-1]
    # open() only works on local paths; it cannot write to an HTTP URL
    f = open("http://10.232.152.19:80/temp.txt", "a")
    f.write(serial)
    f.close()

get_serial()
The problem you are facing is that you are trying to open a file over the network. open() only works on local files, so you need to download the file to your system first. Use urllib to fetch the file, modify it locally, and then push it back to the server.
import urllib

target_url = 'http://10.232.152.19:80/temp.txt'
txt = urllib.urlopen(target_url).read()
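Pushing the modified contents back then needs an HTTP request the server will accept. A sketch, assuming a hypothetical /upload handler on the Debian host and a made-up serial number (a plain static file server will reject writes):

import urllib
import urllib2

target_url = 'http://10.232.152.19:80/temp.txt'

# Fetch the current contents, append the serial, push the result back.
txt = urllib.urlopen(target_url).read()
payload = txt + 'FOC1234X56Y' + '\n'  # hypothetical serial number

# /upload is a hypothetical endpoint: the Debian host needs a CGI/WSGI
# handler (or WebDAV enabled) to accept the POST; plain Apache/nginx
# will not let you write to a static .txt file.
req = urllib2.Request('http://10.232.152.19:80/upload', data=payload)
response = urllib2.urlopen(req)
print(response.getcode())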

Couchbase opening bucket just after create fail

I'm creating some Python scripts for automatically testing some Couchbase operations.
Something unexpected happens while executing this code:
for i in range(0, BUCKETS_AMOUNT):  # BUCKETS_AMOUNT = 4
    bucket_name = '%s%s' % (BUCKET_NAME_PREFIX, i)  # BUCKET_NAME_PREFIX = 'test_bck_'
    print('Creating bucket: %s' % bucket_name)
    admin.bucket_create(bucket_name, ram_quota=512, replicas=1)
    print('Opening bucket: %s' % bucket_name)
    bucket = cluster.open_bucket(bucket_name)
    print('Bucket: %s' % bucket)
    inserted_data[bucket_name] = _fill_bucket(bucket)
<Key='/pools/default/buckets/test_bck_1', RC=0x3B[HTTP Operation failed. Inspect status code for details], HTTP Request failed. Examine 'objextra' for full result, Results=1, C Source=(src/http.c,144), OBJ=HttpResult<rc=0x0, value=b'Requested resource not found.\r\n', http_status=404, url=/pools/default/buckets/test_bck_1, tracing_context=0, tracing_output=None>, Tracing Output={"/pools/default/buckets/test_bck_1": null}>
Creating bucket: test_bck_0
Opening bucket: test_bck_0
E
======================================================================
ERROR: test_backup (__main__.TestBackup)
----------------------------------------------------------------------
Traceback (most recent call last):
File "couchbase_backup_test.py", line 29, in test_backup
expected = create_and_fill_test_buckets(self.cluster, self.admin)
File "/u01/app/couchbase/bucket_data_util.py", line 41, in create_and_fill_test_buckets
bucket = cluster.open_bucket(bucket_name)
File "/u01/app/couchbase/env_cb/lib/python3.6/site-packages/couchbase/cluster.py", line 144, in open_bucket
rv = self.bucket_class(str(connstr), **kwargs)
File "/u01/app/couchbase/env_cb/lib/python3.6/site-packages/couchbase/bucket.py", line 273, in __init__
self._do_ctor_connect()
File "/u01/app/couchbase/env_cb/lib/python3.6/site-packages/couchbase/bucket.py", line 282, in _do_ctor_connect
self._connect()
couchbase.exceptions._ProtocolError_0x16 (generated, catch ProtocolError): <RC=0x16[Data received on socket was not in the expected format], There was a problem while trying to send/receive your request over the network. This may be a result of a bad network or a misconfigured client or server, C Source=(src/bucket.c,1066)>
----------------------------------------------------------------------
In this example, bucket test_bck_0 is created and filled, but the script then seems to try to open test_bck_1 before it has even been created.
When I execute this code remotely, everything works perfectly. But I need to run it locally from the actual node.
There is a slight version difference, but I have no way to align that.
Couchbase server version: 5.1
It works remotely from:
OS: Windows 7 x64
Python: 3.4.4
couchbase: 2.3.5
Does not work from:
OS: Red Hat Enterprise 7.5
Python: 3.6.3
couchbase: 2.4.0
The problem is that creating a bucket is an asynchronous action, so there needs to be a delay between issuing the create-bucket request and opening the bucket.
Adding something like this between creating and opening the bucket will help:
import time
time.sleep(5)
You probably aren't seeing this happen when running your script against a remote cluster because it's likely a dedicated cluster with more resources (CPU/RAM), plus network latency adds a little delay of its own.
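A more robust alternative to a fixed sleep is to poll until the bucket is actually reachable. A sketch against the 2.x Python SDK; the attempt count and delay are arbitrary:

import time
from couchbase.exceptions import CouchbaseError

def open_bucket_with_retry(cluster, bucket_name, attempts=10, delay=1):
    # Bucket creation is asynchronous, so retry open_bucket until the
    # node reports the bucket as ready instead of sleeping blindly.
    for _ in range(attempts):
        try:
            return cluster.open_bucket(bucket_name)
        except CouchbaseError:
            time.sleep(delay)
    raise RuntimeError('Bucket %s not ready after %d attempts'
                       % (bucket_name, attempts))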
You can also use couchbase-cli bucket-create to create buckets; the CLI exposes many more operations than the SDK API does.

Executing sparql query from python to virtuoso server in linux?

I am having a problem running the following program (sparql_test.py) from a Linux machine. Virtuoso server is installed on the same machine. On this Linux server I have neither sudo permission nor browser access, but I can execute SPARQL queries successfully from the isql prompt (SQL>).
Program: sparql_test.py
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8890/sparql")
sparql.setQuery("select ?s where { ?s a <http://ehrofip.com/data/Admissions>.} limit 10")
sparql.setReturnFormat(JSON)
result = sparql.query().convert()

for res in result["results"]["bindings"]:
    print(res)
I got the following error:
[suresh#deodar complex2vec]$ python sparql_test.py
Traceback (most recent call last):
File "sparql1.py", line 14, in "<module>"
result = sparql.query().convert()
File "/home/suresh/.local/lib/python2.7/site-packages/SPARQLWrapper/Wrapper.py", line 687, in query
return QueryResult(self._query())
File "/home/suresh/.local/lib/python2.7/site-packages/SPARQLWrapper/Wrapper.py", line 667, in _query
raise e
urllib2.HTTPError: HTTP Error 502: Bad Gateway
However, the above program runs smoothly on my own laptop. What might be the problem? Is this a connection issue?
Thank you
Best,
Suresh
I do not believe this error is raised by Virtuoso. I believe it is raised by SPARQLWrapper.
It looks like there's something between the outside world (which includes the Linux machine itself) and the Virtuoso listener on port 8890. The "Bad Gateway" suggests two things may be in play: a reverse proxy and a firewall.
Port 8890 (set as [HttpServer]:Listen in the INI file) must be open to communications, direct or proxied, for SPARQL access to work.
iSQL talks to port 1111 (set as [Parameters]:Listen in the INI file), which apparently doesn't have a similar block/proxy.
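To confirm where the 502 comes from, you can probe the listener directly from the same machine, bypassing SPARQLWrapper. A sketch in Python 2, to match the traceback:

import urllib2

try:
    # Any HTTP response (even a 400 for a missing query parameter)
    # proves the listener is reachable; a 502 points at a proxy or
    # firewall sitting in between.
    urllib2.urlopen('http://localhost:8890/sparql', timeout=10)
    print('Listener reachable')
except urllib2.HTTPError as e:
    print('Listener reachable, HTTP status: %d' % e.code)
except urllib2.URLError as e:
    print('Cannot reach listener: %s' % e.reason)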

How do I configure my uWSGI server to protect against the UnreadablePostError?

This is the problem:
File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/six.py", line 535, in next
return type(self).__next__(self)
File "/app/.heroku/python/lib/python2.7/site-packages/django/http/multipartparser.py", line 344, in __next__
output = next(self._producer)
File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/six.py", line 535, in next
return type(self).__next__(self)
File "/app/.heroku/python/lib/python2.7/site-packages/django/http/multipartparser.py", line 406, in __next__
data = self.flo.read(self.chunk_size)
File "/app/.heroku/python/lib/python2.7/site-packages/django/http/request.py", line 267, in read
six.reraise(UnreadablePostError, UnreadablePostError(*e.args), sys.exc_info()[2])
File "/app/.heroku/python/lib/python2.7/site-packages/django/http/request.py", line 265, in read
return self._stream.read(*args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 59, in read
result = self.buffer + self._read_limited(size - len(self.buffer))
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 47, in _read_limited
result = self.stream.read(size)
UnreadablePostError: error during read(65536) on wsgi.input
My current configuration reads like this:
[uwsgi]
http-socket = :$(PORT)
master = true
processes = 4
die-on-term = true
module = app.wsgi:application
memory-report = true
chunked-input-limit = 25000000
chunked-input-timeout = 300
socket-timeout = 300
Python: 2.7.x | uWSGI: 2.0.10
To make the problem even more specific: this happens when I process images synchronously along with an image upload. I know that ideally I should do this using Celery, but because of a business requirement I am not able to. So I need to configure the timeout in such a way that it allows me to accept a large image file, process it, and then return a response.
Any kind of light on the question will be extremely helpful. Thank you.
The error quoted in the description isn't the full picture; the relevant part is this log entry:
[uwsgi-body-read] Error reading 65536 bytes … message: Client closed connection uwsgi_response_write_body_do() TIMEOUT
This specific error is being raised because (most probably) the client, or something between it and uWSGI, aborted the request.
There are a number of possible causes for this:
A buggy client
Network-level filtering (DPI or some misconfigured firewall)
Bugs / misconfiguration in the server in front of uWSGI
The last one is covered in the uWSGI docs:
If you plan to put uWSGI behind a proxy/router be sure it supports chunked input requests (or generally raw HTTP requests).
To verify your issue really isn't in uWSGI, try to upload the file via the console on the server hosting your uWSGI application. Hit the HTTP endpoint directly, bypassing nginx/haproxy and friends.
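For example, a direct multipart upload from the same host (a sketch; the URL, port, field name, and file are placeholders, and it assumes the requests package is available):

import requests

# Point the URL straight at uWSGI's http-socket so nginx/haproxy
# and any other proxy in front are bypassed.
url = 'http://127.0.0.1:8000/upload'

with open('large_image.jpg', 'rb') as f:
    r = requests.post(url, files={'image': f}, timeout=300)

print(r.status_code)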

Connection refused to Twitter API on PythonAnywhere

I am trying to connect to the Twitter streaming API on PythonAnywhere, but always get a connection refused error.
I use Tweepy in my application, and to test the connection I am using the streaming example that can be found in the repo.
Here is a summary of the code:
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream

# Go to http://dev.twitter.com and create an app.
# The consumer key and secret will be generated for you after
consumer_key = ""
consumer_secret = ""

# After the step above, you will be redirected to your app's page.
# Create an access token under the "Your access token" section
access_token = ""
access_token_secret = ""

class StdOutListener(StreamListener):
    """A listener handles tweets that are received from the stream.
    This is a basic listener that just prints received tweets to stdout.
    """
    def on_data(self, data):
        print data
        return True

    def on_error(self, status):
        print status

if __name__ == '__main__':
    l = StdOutListener()
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    stream = Stream(auth, l)
    stream.filter(track=['basketball'])
When I run this line in a Bash console on PythonAnywhere (after having filled in the tokens, of course):
12:02 ~/tweepy/examples (master)$ python streaming.py
I get the following error:
Traceback (most recent call last):
File "streaming.py", line 33, in <module>
stream.filter(track=['basketball'])
File "/usr/local/lib/python2.7/site-packages/tweepy/streaming.py", line 228, in filter
self._start(async)
File "/usr/local/lib/python2.7/site-packages/tweepy/streaming.py", line 172, in _start
self._run()
File "/usr/local/lib/python2.7/site-packages/tweepy/streaming.py", line 106, in _run
conn.connect()
File "/usr/local/lib/python2.7/httplib.py", line 1157, in connect
self.timeout, self.source_address)
File "/usr/local/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 111] Connection refused
The domain .twitter.com is in the PythonAnywhere whitelist though, so I don't understand why the connection would be refused.
The very same code works like a charm on my Ubuntu machine.
Any idea would be more than welcome, thanks!
If you're using a Free account, tweepy won't work. It does not use the proxy settings from the environment.
There is a fork of tweepy that you might be able to use (http://github.com/ducu/tweepy) until the main line uses the proxy settings correctly.
As Glenn said, there currently isn't proxy support in tweepy.
For a reason I cannot explain (and that isn't documented), a pull request adding it was closed without being merged about a month ago:
https://github.com/tweepy/tweepy/pull/152
There is apparently a fork available on GitHub (see Glenn's answer), but I didn't test it.
Knowing that I would need to use my own domain name in the end, I finally got a paid account on PythonAnywhere and got rid of the proxy stuff altogether.
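For anyone debugging this, a quick way to see the proxy settings that tweepy ignores (a sketch; the variable names follow the usual environment convention):

import os

# On a PythonAnywhere free account, outbound traffic must go through
# the proxy described by these variables; tweepy does not honour them,
# which is why the direct connection is refused.
for var in ('http_proxy', 'https_proxy'):
    print('%s = %s' % (var, os.environ.get(var)))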
