When I try to call Model.put() in Eclipse for a Google App Engine Python app, I get the following error:
exception value:[Error 5] Access is denied
I don't know if it's related, but this happened after I changed the parameter --datastore_path="F:/tmp/myapp_datastore" in the arguments of my debug configuration.
Everything is working fine for another application from the command prompt.
However, when I run the same application from Eclipse, I get the following dump in the Eclipse console window:
ERROR 2009-06-11 10:19:41,312 dev_appserver.py:2906] Exception encountered handling request
Traceback (most recent call last):
  File "F:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2876, in _HandleRequest
    base_env_dict=env_dict)
  File "F:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 387, in Dispatch
    base_env_dict=base_env_dict)
  File "F:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2163, in Dispatch
    self._module_dict)
  File "F:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2081, in ExecuteCGI
    reset_modules = exec_script(handler_path, cgi_path, hook)
  File "F:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1979, in ExecuteOrImportScript
    script_module.main()
  File "F:\eclipse\workspace\checkthis\src\carpoolkaro.py", line 749, in main
    run_wsgi_app(application)
  File "F:\Program Files\Google\google_appengine\google\appengine\ext\webapp\util.py", line 76, in run_wsgi_app
    result = application(env, _start_response)
  File "F:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 517, in __call__
    handler.handle_exception(e, self.__debug)
  File "F:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 384, in handle_exception
    self.error(500)
TypeError: 'str' object is not callable
INFO 2009-06-11 10:19:41,312 dev_appserver.py:2935] "POST /suggest HTTP/1.1" 500 -
This is the screen dump of the application from a browser window:
F:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py in handle_exception(self=<__main__.SuggestHandler object at 0x019C0510>, exception=WindowsError(5, 'Access is denied'), debug_mode=True)
According to this error,
TypeError: 'str' object is not callable
my guess is that you have shadowed the built-in str with something else. For example, if you wrote str = "dummy" somewhere in your code, str becomes a plain string and is no longer callable.
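A minimal sketch of that failure mode (the function name here is hypothetical, not taken from your code):

str = "dummy"              # the built-in str is now shadowed by a plain string

def render_error(code):
    # any later attempt to call str(...) fails exactly like the traceback above
    return str(code)       # TypeError: 'str' object is not callable

render_error(500)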
Error 5 generally means the path you specified is wrong. I recommend removing the double quotation marks from your argument.
Try:
--datastore_path=F:/tmp/myapp_datastore
and let us know if that helps.
I'm trying to implement a live search through the jQuery plugin Select2 in my Django 1.11.4 project, running Python 3.6. When I type in the textbox, the server doesn't seem to be able to handle the number of requests and it closes, followed by the series of errors below.
I've spent a couple of hours following several threads on SO, Google, etc., but none have a working solution. I thought it was a Python version issue, but I've upgraded from 2.7 to 3.6 and still have it. Any help would be great, thanks!
This is the view that's being called by the ajax call:
def directory_search(request):
    # Query text from the search bar
    query = request.GET.get('q', None)
    if query:
        # Search the DB for matching values
        customers = Customer.objects.filter(
            Q(name__icontains=query) |
            Q(contract_account__icontains=query) |
            Q(address__icontains=query) |
            Q(email__icontains=query)
        ).distinct()
        # Keep only the fields Select2 needs
        customers = customers.values("contract_account", "name")
        # Turn the queryset into a plain list so it can be serialized
        customers_list = list(customers)
        return JsonResponse(customers_list, safe=False)
    else:
        return JsonResponse(data={'success': False, 'errors': 'No matching items found'})
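For reference, customers_list here ends up as a plain Python list of dicts, which is why safe=False is needed on JsonResponse (the top-level object is a list, not a dict). A hypothetical payload for a matching search might look like this (the values are made up):

customers_list = [
    {"contract_account": "12345", "name": "Acme Corp"},
    {"contract_account": "67890", "name": "Acme Industries"},
]

The processResults callback below then maps name to the option text and contract_account to its id.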
This is the js/ajax call for the Select2 plugin:
$(".customer-search-bar").select2({
ajax:{
dataType: 'json',
type: 'GET',
url:"{% url 'portal:directory_search' %}",
data: function (params) {
var queryParameters = {
q: params.term
}
return queryParameters;
},
processResults: function (data) {
return {
results: $.map(data, function (item) {
return {
text: item.name,
id: item.contract_account
}
})
};
}
},
escapeMarkup: function (markup) { return markup; },
minimumInputLength: 1,
templateResult: formatRepo,
templateSelection: formatRepoSelection,
language: { errorLoading:function(){ return "Searching..." }}
});
Errors (I broke them apart to make them more legible, but they occurred in the order presented):
1.
Traceback (most recent call last):
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 138, in run
self.finish_response()
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 180, in finish_response
self.write(data)
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 274, in write
self.send_headers()
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 332, in send_headers
self.send_preamble()
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 255, in send_preamble
('Date: %s\r\n' % format_date_time(time.time())).encode('iso-8859-1')
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 453, in _write
result = self.stdout.write(data)
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\socketserver.py", line 775, in write
self._sock.sendall(b)
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
2.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 141, in run
self.handle_error()
File "C:\Users\leep\PythonStuff\virtual_environments\rpp_3.6\lib\site-packages\django\core\servers\basehttp.py", line 88, in handle_error
super(ServerHandler, self).handle_error()
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 368, in handle_error
self.finish_response()
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 180, in finish_response
self.write(data)
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 274, in write
self.send_headers()
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 331, in send_headers
if not self.origin_server or self.client_is_modern():
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 344, in client_is_modern
return self.environ['SERVER_PROTOCOL'].upper() != 'HTTP/0.9'
TypeError: 'NoneType' object is not subscriptable
3.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\socketserver.py", line 639, in process_request_thread
self.finish_request(request, client_address)
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\socketserver.py", line 361, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\socketserver.py", line 696, in __init__
self.handle()
File "C:\Users\leep\PythonStuff\virtual_environments\rpp_3.6\lib\site-packages\django\core\servers\basehttp.py", line 155, in handle
handler.run(self.server.get_app())
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\handlers.py", line 144, in run
self.close()
File "C:\Users\leep\AppData\Local\Programs\Python\Python36-32\lib\wsgiref\simple_server.py", line 35, in close
self.status.split(' ',1)[0], self.bytes_sent
AttributeError: 'NoneType' object has no attribute 'split'
It appears to be a known Python 2.7 bug; maybe it remains in Python 3.6:
https://bugs.python.org/issue14574
I'm having a similar issue in Django, also running Python 3.6:
TypeError: 'NoneType' object is not subscriptable followed by AttributeError: 'NoneType' object has no attribute 'split'
Edit
It seems to be an error in Chrome. I didn't notice it at first because I was using Opera, but Opera uses Chromium as well. In Internet Explorer the error didn't show up.
This is the bug tracker entry: https://code.djangoproject.com/ticket/21227#no1
I am a newbie to Toil and AWS, trying to run the HelloWorld.py example from the Toil documentation. I have already successfully installed Toil and the related Python packages on my local Mac laptop and have set up my AWS account. I have created a small leader/worker cluster:
$ cgcloud create-cluster toil -s 2 -t m3.large
and started it:
$ cgcloud ssh toil-leader
This changed my screen prompt to:
mesosbox#ip-172-31-25-135:~$
Then, from another window on my Mac, I started the Toil HelloWorld example with the command:
$ python2.7 HelloWorld.py --batchSystem=mesos --mesosMaster=mesos-master:5050 aws:us-west-2:my-aws-jobstore
And I got the following output:
Apples-Air 2017-06-02 19:30:53,524 MainThread INFO toil.lib.bioio: Root logger is at level 'INFO', 'toil' logger at level 'INFO'.
Apples-Air 2017-06-02 19:30:53,524 MainThread INFO toil.lib.bioio: Root logger is at level 'INFO', 'toil' logger at level 'INFO'.
Apples-Air 2017-06-02 19:30:54,852 MainThread WARNING toil.jobStores.aws.jobStore: Exception during panic
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 209, in initialize
self.destroy()
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 1334, in destroy
self._bind(create=False, block=False)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 241, in _bind
versioning=True)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 721, in _bindBucket
bucket = self.s3.get_bucket(bucket_name, validate=True)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 502, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 535, in head_bucket
raise err
S3ResponseError: S3ResponseError: 403 Forbidden
Traceback (most recent call last):
File "helloWorld.py", line 22, in <module>
print(Job.Runner.startToil(j, options)) #Prints Hello, world!, ….
File "/usr/local/lib/python2.7/site-packages/toil/job.py", line 740, in startToil
with Toil(options) as toil:
File "/usr/local/lib/python2.7/site-packages/toil/common.py", line 614, in __enter__
jobStore.initialize(config)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 209, in initialize
self.destroy()
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 206, in initialize
self._bind(create=True)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 241, in _bind
versioning=True)
File "/usr/local/lib/python2.7/site-packages/toil/jobStores/aws/jobStore.py", line 721, in _bindBucket
bucket = self.s3.get_bucket(bucket_name, validate=True)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 502, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 535, in head_bucket
raise err
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
Please help.
Thanks.
---John
I realize that this answer is a little late. One problem I notice is with the mesosMaster argument.
Instead, your command should have looked like:
python2.7 HelloWorld.py --batchSystem=mesos --mesosMaster=172.31.25.135:5050 aws:us-west-2:my-aws-jobstore
Notice that I replaced mesos-master with the actual IP address from the leader's prompt:
mesosbox#ip-172-31-25-135:~$
Hopefully, in the future one will not need to pass this argument at all; however, this is not yet implemented as of 26 July 2017.
Also, for further problems with Toil you will probably have better luck posting a new issue on the Toil GitHub page.
I am trying to use a gevent-based server with the remote datastore API, and I occasionally get:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/gevent/pywsgi.py", line 884, in handle_one_response
self.run_application()
File "/home/abhinavabcd_gmail_com/samosa-scripts/samosa_messaging_framework/geventwebsocket/handler.py", line 76, in run_ap
plication
self.run_websocket()
File "/home/abhinavabcd_gmail_com/samosa-scripts/samosa_messaging_framework/geventwebsocket/handler.py", line 52, in run_we
bsocket
self.application(self.environ, lambda s, h, e=None: [])
File "server.py", line 478, in __call__
current_app.handle()
File "/home/abhinavabcd_gmail_com/samosa-scripts/samosa_messaging_framework/geventwebsocket/resource.py", line 23, in handl
e
self.protocol.on_close()
File "/home/abhinavabcd_gmail_com/samosa-scripts/samosa_messaging_framework/geventwebsocket/protocols/base.py", line 14, in
on_close
self.app.on_close(reason)
File "server.py", line 520, in on_close
current_node.destroy_connection(self.ws)
File "server.py", line 428, in destroy_connection
db.remove_connection(conn.connection_id)
File "/home/abhinavabcd_gmail_com/samosa-scripts/samosa_messaging_framework/database_ndb.py", line 141, in remove_connectio
n
key.delete()
File "/home/abhinavabcd_gmail_com/google_appengine/google/appengine/ext/ndb/model.py", line 3451, in _put
return self._put_async(**ctx_options).get_result()
File "/home/abhinavabcd_gmail_com/google_appengine/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/home/abhinavabcd_gmail_com/google_appengine/google/appengine/ext/ndb/tasklets.py", line 378, in check_success
self.wait()
File "/home/abhinavabcd_gmail_com/google_appengine/google/appengine/ext/ndb/tasklets.py", line 362, in wait
if not ev.run1():
File "/home/abhinavabcd_gmail_com/google_appengine/google/appengine/ext/ndb/eventloop.py", line 253, in run1
delay = self.run0()
File "/home/abhinavabcd_gmail_com/google_appengine/google/appengine/ext/ndb/eventloop.py", line 238, in run0
(rpc, self.rpcs))
RuntimeError: rpc <google.appengine.api.apiproxy_stub_map.UserRPC object at 0x7fb46e754a90> was not given to wait_any as a ch
oice {<google.appengine.api.apiproxy_stub_map.UserRPC object at 0x7fb46e779f90>: (<bound method Future._on_rpc_completion of
<Future 7fb46e756f10 created by run_queue(context.py:185) for tasklet _memcache_set_tasklet(context.py:1111); pending>>, (<go
ogle.appengine.api.apiproxy_stub_map.UserRPC object at 0x7fb46e779f90>, '', <google.appengine.datastore.datastore_rpc.Connect
ion object at 0x7fb46e874250>, <generator object _memcache_set_tasklet at 0x7fb46e79b9b0>), {}), <google.appengine.api.apipro
xy_stub_map.UserRPC object at 0x7fb46e75c690>: (<bound method Future._on_rpc_completion of <Future 7fb46e7c8950 created by ru
n_queue(context.py:185) for tasklet _memcache_set_tasklet(context.py:1111); pending>>, (<google.appengine.api.apiproxy_stub_m
ap.UserRPC object at 0x7fb46e75c690>, '', <google.appengine.datastore.datastore_rpc.Connection object at 0x7fb46e7c8ad0>, <ge
nerator object _memcache_set_tasklet at 0x7fb46e6f3640>), {})}
# gevent monkey-patching is already applied before this point
try:
    import dev_appserver
    dev_appserver.fix_sys_path()
except ImportError:
    print('Please make sure the App Engine SDK is in your PYTHONPATH.')
    raise

email = '----SERVICE_ACCOUNT_EMAIL-------'

from google.appengine.ext.remote_api import remote_api_stub

remote_api_stub.ConfigureRemoteApiForOAuth(
    '{}.appspot.com'.format("XXXPROJECT_IDXXX"),
    '/_ah/remote_api',
    service_account=email,
    key_file_path='KEY---1927925e9abc.p12')

import config
from google.appengine.ext import ndb
from lru_cache import LRUCache
It happens occasionally, with datastore write operations (put and delete).
Any pointers to help understand or fix this are appreciated.
I tried to run an mrjob script on Amazon EMR. It worked well when I used a c1.medium instance; however, it raised an error when I changed the instance type to t2.micro. The full error message is shown below.
C:\Users\Administrator\MyIpython>python word_count.py -r emr 111.txt
using configs in C:\Users\Administrator.mrjob.conf
creating new scratch bucket mrjob-875a948553aab9e8
using s3://mrjob-875a948553aab9e8/tmp/ as our scratch dir on S3
creating tmp directory c:\users\admini~1\appdata\local\temp\word_count.Administrator.20150731.013007.592000
writing master bootstrap script to c:\users\admini~1\appdata\local\temp\word_count.Administrator.20150731.013007.592000\b.py
PLEASE NOTE: Starting in mrjob v0.5.0, protocols will be strict by default. It's recommended you run your job with --strict-protocols or set up mrjob.conf as described at https://pythonhosted.org/mrjob/whats-new.html#ready-for-strict-protocols
creating S3 bucket 'mrjob-875a948553aab9e8' to use as scratch space
Copying non-input files into s3://mrjob-875a948553aab9e8/tmp/word_count.Administrator.20150731.013007.592000/files/
Waiting 5.0s for S3 eventual consistency
Creating Elastic MapReduce job flow
Traceback (most recent call last):
  File "word_count.py", line 16, in <module>
    MRWordFrequencyCount.run()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\job.py", line 461, in run
    mr_job.execute()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\job.py", line 479, in execute
    super(MRJob, self).execute()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\launch.py", line 153, in execute
    self.run_job()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\launch.py", line 216, in run_job
    runner.run()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\runner.py", line 470, in run
    self._run()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\emr.py", line 881, in _run
    self._launch()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\emr.py", line 886, in _launch
    self._launch_emr_job()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\emr.py", line 1593, in _launch_emr_job
    persistent=False)
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\emr.py", line 1327, in _create_job_flow
    self._job_name, self._opts['s3_log_uri'], **args)
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\retry.py", line 149, in call_and_maybe_retry
    return f(*args, **kwargs)
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\retry.py", line 71, in call_and_maybe_retry
    result = getattr(alternative, name)(*args, **kwargs)
  File "F:\Program Files\Anaconda\lib\site-packages\boto\emr\connection.py", line 581, in run_jobflow
    'RunJobFlow', params, RunJobFlowResponse, verb='POST')
  File "F:\Program Files\Anaconda\lib\site-packages\boto\connection.py", line 1208, in get_object
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.EmrResponseError: EmrResponseError: 400 Bad Request
Sender
ValidationError
Instance type 't2.micro' is not supported
c3ee1107-3723-11e5-8d8e-f1011298229d
This is my config file:
runners:
  emr:
    aws_access_key_id: xxxxxxxxxxx
    aws_secret_access_key: xxxxxxxxxxxxx
    aws_region: us-east-1
    ec2_key_pair: EMR
    ec2_key_pair_file: C:\Users\Administrator\EMR.pem
    ssh_tunnel_to_job_tracker: false
    ec2_instance_type: t2.micro
    num_ec2_instances: 2
EMR doesn't support the t2 instance type. If you're worried about money, spot instances are a very cost-effective option: right now m1.xlarge is less than $0.05 per hour, and m1.medium is $0.01 per hour (cheaper than t2.micro anyway). The supported types are listed in the EMR web console (the screenshot of the console is not reproduced here).
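As a minimal sketch of the change, assuming m1.medium (one of the supported types mentioned above), only the instance type in the runners.emr section needs to be swapped; the other settings stay as they are:

runners:
  emr:
    # ...credentials, region, and key pair settings unchanged...
    ec2_instance_type: m1.medium   # any EMR-supported type; t2.* is rejected
    num_ec2_instances: 2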
I am getting this error only in production. On localhost it works fine.
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 701, in __call__
handler.get(*groups)
File "/base/data/home/apps/s~ordenaacoes/2.357768699674437719/controllers/mainh.py", line 74, in get
'stocks': goodStocks(),
File "/base/data/home/apps/s~ordenaacoes/2.357768699674437719/controllers/mainh.py", line 108, in goodStocks
goodStocks = memcache.get("goodStocks")
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/memcache/__init__.py", line 574, in get
results = rpc.get_result()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/memcache/__init__.py", line 639, in __get_hook
self._do_unpickle)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/memcache/__init__.py", line 271, in _decode_value
return do_unpickle(value)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/memcache/__init__.py", line 412, in _do_unpickle
return unpickler.load()
File "/base/python_runtime/python_dist/lib/python2.5/pickle.py", line 852, in load
dispatch[key](self)
File "/base/python_runtime/python_dist/lib/python2.5/pickle.py", line 1084, in load_global
klass = self.find_class(module, name)
File "/base/python_runtime/python_dist/lib/python2.5/pickle.py", line 1119, in find_class
klass = getattr(mod, name)
AttributeError: 'module' object has no attribute 'Stock'
Stock is one of my model classes. I tested with Python 2.5 on localhost too.
The line that raises the error is the memcache access (the get call).
I have changed the project, and the type of data I put in memcache may be different now. Is there some way to clear the data in memcache?
Any idea?
As of release 1.6.4 there is a Memcache Viewer in the Admin Console. It includes a "Flush Cache" button that should do exactly what you need.
Most likely you have a pickled version of an object in memcache that doesn't match your new code. Here's an old question on flushing memcache; the answer should apply to your case:
How can I have Google App Engine clear memcache every time a site is deployed?
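If you would rather clear the cache from code instead of the Admin Console, here is a minimal sketch using the App Engine memcache API; the handler class name and the idea of wiring it to an admin-only URL are hypothetical, not taken from your app:

from google.appengine.api import memcache
from google.appengine.ext import webapp

class FlushMemcacheHandler(webapp.RequestHandler):
    # Hypothetical admin-only handler: one GET request wipes the whole cache,
    # so stale pickled objects can no longer be unpickled against new code.
    def get(self):
        if memcache.flush_all():
            self.response.out.write('memcache flushed')
        else:
            self.response.out.write('flush failed')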