I ran into an annoying (though not critical) problem when I tried to combine the Tornado and pyzmq ioloops as described in the official pyzmq documentation.
I have a process running a Tornado (T) server which accepts REST API requests from clients (C) and proxies them through ZMQ transport to another process (Z) that does the real work.
C <-> T <-> Z
If C closes the connection before Z replies, T (the Tornado process) outputs a long series of exception traces (see the bottom of this post). Consider the following example:
import time

import tornado.ioloop
from tornado.web import Application, RequestHandler
from zmq.eventloop import ioloop


def time_consuming_task():
    time.sleep(5)


class TestHandler(RequestHandler):
    def get(self, arg):
        print "Test arg", arg
        time_consuming_task()
        print "Ok, time to reply"
        self.write("Reply")


if __name__ == "__main__":
    app = Application([
        (r"/test/([0-9]+)", TestHandler),
    ])
    ioloop.install()  # register pyzmq's ioloop as the tornado IOLoop
    app.listen(8080)
    tornado.ioloop.IOLoop.instance().start()
This example doesn't actually talk to any ZMQ peer; it just attaches the pyzmq ioloop to Tornado's ioloop. Still, it's enough to illustrate the problem.
From console one, run the server:
% python example.py
From console two, run the client and interrupt it before the server replies (i.e. within the 5 seconds):
% curl -is http://localhost:8080/test/1
^C
The output of the server is:
Test arg 1
Ok, time to reply
WARNING:root:Read error on 24: [Errno 54] Connection reset by peer
ERROR:root:Uncaught exception GET /test/1 (::1)
HTTPRequest(protocol='http', host='localhost:8080', method='GET', uri='/test/1', version='HTTP/1.1', remote_ip='::1', body='', headers={'Host': 'localhost:8080', 'Accept': '*/*', 'User-Agent': 'curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5'})
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1023, in _execute
self.finish()
File "/Library/Python/2.7/site-packages/tornado/web.py", line 701, in finish
self.request.finish()
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 433, in finish
self.connection.finish()
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 187, in finish
self._finish_request()
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 223, in _finish_request
self.stream.read_until(b("\r\n\r\n"), self._header_callback)
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 153, in read_until
self._try_inline_read()
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 386, in _try_inline_read
if self._read_to_buffer() == 0:
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 421, in _read_to_buffer
chunk = self._read_from_socket()
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 402, in _read_from_socket
chunk = self.socket.recv(self.read_chunk_size)
error: [Errno 54] Connection reset by peer
ERROR:root:Cannot send error response after headers written
ERROR:root:Uncaught exception, closing connection.
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 304, in wrapper
callback(*args)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 262, in _on_headers
self.request_callback(self._request)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1412, in __call__
handler._execute(transforms, *args, **kwargs)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1025, in _execute
self._handle_request_exception(e)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1065, in _handle_request_exception
self.send_error(500, exc_info=sys.exc_info())
File "/Library/Python/2.7/site-packages/tornado/web.py", line 720, in send_error
self.finish()
File "/Library/Python/2.7/site-packages/tornado/web.py", line 700, in finish
self.flush(include_footers=True)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 660, in flush
self.request.write(headers + chunk, callback=callback)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 429, in write
self.connection.write(chunk, callback=callback)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 177, in write
assert self._request, "Request closed"
AssertionError: Request closed
ERROR:root:Exception in callback
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pyzmq-2.2.0-py2.7-macosx-10.7-intel.egg/zmq/eventloop/ioloop.py", line 434, in _run_callback
callback()
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 304, in wrapper
callback(*args)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 262, in _on_headers
self.request_callback(self._request)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1412, in __call__
handler._execute(transforms, *args, **kwargs)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1025, in _execute
self._handle_request_exception(e)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1065, in _handle_request_exception
self.send_error(500, exc_info=sys.exc_info())
File "/Library/Python/2.7/site-packages/tornado/web.py", line 720, in send_error
self.finish()
File "/Library/Python/2.7/site-packages/tornado/web.py", line 700, in finish
self.flush(include_footers=True)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 660, in flush
self.request.write(headers + chunk, callback=callback)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 429, in write
self.connection.write(chunk, callback=callback)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 177, in write
assert self._request, "Request closed"
AssertionError: Request closed
NOTE: It seems to be a pyzmq-related problem, because it disappears after excluding the pyzmq ioloop.
The server doesn't die and can still be used by other clients, so the problem is not critical. Still, it's very annoying to find these huge, confusing traces in log files.
So, are there any well-known methods to solve this problem?
Thanks.
It's not a ZMQ problem. A request can be closed for reasons other than a timeout. The only ZMQ-related issue here is that it raises a plain AssertionError, which is too generic, instead of a more specific exception.
If you are sure you don't want these exceptions in your log files, do something like this:
import logging

try:
    time_consuming_task()
except AssertionError as e:
    if e.message == 'Request closed':
        logging.info('Bad, annoying client, came to us again!')
    else:
        raise  # re-raise anything we did not expect
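Note that in the fully blocking example above, the AssertionError is actually raised from Tornado's own finish() machinery after get() returns (see the second traceback), so in a real asynchronous setup the try/except would go around whatever self.write()/self.finish() calls send the reply from the ZMQ callback.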
Related
I'm trying to connect to IB (Interactive Brokers):
ib.connect(host, port, clientId=3, readonly=readonly)
This internally uses asyncio.
When I run it directly from a Python file ib_test, it works well.
But when I try to do
from multiprocessing import Process

p = Process(target=main)
p.start()
p.join()
(main is the main function of the same file, and the only thing that gets called), it doesn't work: it times out. What is strange is that in Wireshark it seems the server doesn't send the next packet, but maybe the client doesn't do the receive (although I'd expect the packet to appear anyway).
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\xxxx\tttt\ibtest.py", line 56, in main
mediator=IBMediator()
File "C:\Users\xxxx\tttt\ibtest.py", line 22, in __init__
self._ibsource : IBSource=IBSource(host='127.0.0.1',port=PORT,clientId=IBMediator.clientId)
File "c:\users\xxxx\zzzz\ibsource.py", line 22, in __init__
self.ib.connect(host,port , clientId=3, readonly=readonly)
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\site-packages\ib_insync-0.9.70-py3.9.egg\ib_insync\ib.py", line 269, in connect
return self._run(self.connectAsync(
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\site-packages\ib_insync-0.9.70-py3.9.egg\ib_insync\ib.py", line 308, in _run
return util.run(*awaitables, timeout=self.RequestTimeout)
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\site-packages\ib_insync-0.9.70-py3.9.egg\ib_insync\util.py", line 332, in run
result = loop.run_until_complete(task)
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
return future.result()
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\site-packages\ib_insync-0.9.70-py3.9.egg\ib_insync\ib.py", line 1658, in connectAsync
await self.client.connectAsync(host, port, clientId, timeout)
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\site-packages\ib_insync-0.9.70-py3.9.egg\ib_insync\client.py", line 216, in connectAsync
await asyncio.wait_for(self.apiStart, timeout)
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py", line 494, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
Do you know what the cause could be?
Of course, directly calling main won't work either.
Before running it, I run BackgroundScheduler, which seems like the only plausible cause.
Adding an asyncio event loop didn't work either.
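For reference, a sketch of what that event-loop attempt would look like (IBMediator is the class from the traceback above; this is illustrative and did not resolve the timeout):
import asyncio
from multiprocessing import Process

def main():
    # give the child process its own event loop instead of whatever
    # state it inherited from the parent on fork/spawn
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    mediator = IBMediator()  # internally calls ib.connect(...) as above

if __name__ == '__main__':
    p = Process(target=main)
    p.start()
    p.join()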
I'm trying to use APScheduler with a PostgreSQL database via an asyncpg connection. I thought it would work, because asyncpg supports SQLAlchemy (ref). But it isn't working. And to make it even worse, I don't understand the error message, so I don't even have a guess what to google for.
import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore


def simple_job():
    print('This was an easy job!')


scheduler = AsyncIOScheduler()
jobstore = SQLAlchemyJobStore(url='postgresql+asyncpg://user:password@localhost:5432/public')
scheduler.add_jobstore(jobstore)

# schedule a simple job
scheduler.add_job(simple_job, 'cron', second='15', id='heartbeat',
                  coalesce=True, misfire_grace_time=5, replace_existing=True)
scheduler.start()
Versions:
python 3.7
APScheduler==3.7.0
asyncpg==0.22.0
SQLAlchemy==1.4.3
Error Message and traceback:
Traceback (most recent call last):
File "C:/Users/d/PycharmProjects/teamutils/utils/automation.py", line 320, in <module>
scheduler.start()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\schedulers\asyncio.py", line 45, in start
super(AsyncIOScheduler, self).start(paused)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\schedulers\base.py", line 163, in start
store.start(self, alias)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 68, in start
self.jobs_t.create(self.engine, True)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\sql\schema.py", line 940, in create
bind._run_ddl_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2979, in _run_ddl_visitor
with self.begin() as conn:
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2895, in begin
conn = self.connect(close_with_result=close_with_result)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3067, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 91, in __init__
else engine.raw_connection()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3146, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3113, in _wrap_pool_connect
return fn()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 301, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 755, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 419, in checkout
rec = pool._do_get()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 145, in _do_get
self._dec_overflow()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\compat.py", line 198, in raise_
raise exception
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 142, in _do_get
return self._create_connection()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 247, in _create_connection
return _ConnectionRecord(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 362, in __init__
self.__connect(first_connect_check=True)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 605, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\compat.py", line 198, in raise_
raise exception
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 599, in __connect
connection = pool._invoke_creator(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\default.py", line 548, in connect
return self.dbapi.connect(*cargs, **cparams)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 744, in connect
await_only(self.asyncpg.connect(*arg, **kw)),
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\_concurrency_py3k.py", line 48, in await_only
"greenlet_spawn has not been called; can't call await_() here. "
sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_() here. Was IO attempted in an unexpected place? (Background on this error at: http://sqlalche.me/e/14/xd2s)
sys:1: RuntimeWarning: coroutine 'connect' was never awaited
I looked up the provided link, but I can't make much sense of it. It would be nice if somebody could tell me what is going on, so I can search for a solution on my own. (A solution would be okay too, of course xD)
Sorry for this "open" question, but my understanding is so limited that I don't know what to ask for.
I think the problem is in APScheduler.
What is happening is that scheduler.start() attempts to create the job table in your database. But your database URL specifies +asyncpg, and there is no async coroutine running (i.e. an async def) when APScheduler tries to create the table; hence the "coroutine 'connect' was never awaited" error.
After reading the APScheduler code, I think "integrates with asyncio" is a little misleading: the scheduler itself can run on asyncio, but the job store has no provision for an asyncio database connection.
You can get it working by removing +asyncpg from the connection URL used with APScheduler.
Note that it would still be possible to use async db calls within job functions via a separate asyncpg connection.
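For example, a minimal sketch of the change (same placeholder credentials as in the question):
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

scheduler = AsyncIOScheduler()
# plain synchronous driver (psycopg2 by default) instead of +asyncpg;
# SQLAlchemyJobStore talks to the database synchronously
scheduler.add_jobstore(SQLAlchemyJobStore(url='postgresql://user:password@localhost:5432/public'))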
I have a series of tests over a web page. I use WebDriver for this, and I try to detect the moment when the browser (Firefox) is forced to quit, for example from the GUI. When that happens, I get a very long and ugly traceback.
The main program executes the test suite in a separate thread. For example, this code:
def urlopen(self, url):
    '''Opens the browser driver and redirects it to the specified url address.
    If the web driver is not initialized, it tries to initialize it first.
    '''
    # check webdriver initialization; if broken or not initialized, it can be fixed
    try:
        self.redirectToBlank(self.driver)
    except (urllib.error.URLError, AttributeError):  # user closed the web driver, or it is None
        try:
            self.initDriver()
        except:
            raise
    # !! this is the moment when I close the browser window
    # if there is a problem with URL loading, it cannot be repaired
    try:
        self._driver.get(url)
    except:
        print("Webdriver crashed or was forced to quit!", file=sys.stderr)
is the method for opening the browser. The initDriver method initializes self._driver, which is an instance of webdriver.Firefox.
Exception in thread Thread-2:
Traceback (most recent call last):
File "c:\Users\david\workspace\tester\sdi\testing.py", line 165, in urlopen
self._driver.get(url);
File "c:\Python33\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 176, in get
self.execute(Command.GET, {'url': url})
File "c:\Python33\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 162, in execute
response = self.command_executor.execute(driver_command, params)
File "c:\Python33\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 355, in execute
return self._request(url, method=command_info[0], data=data)
File "c:\Python33\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 402, in _request
response = opener.open(request)
File "c:\Python33\lib\urllib\request.py", line 469, in open
response = self._open(req, data)
File "c:\Python33\lib\urllib\request.py", line 487, in _open
'_open', req)
File "c:\Python33\lib\urllib\request.py", line 447, in _call_chain
result = func(*args)
File "c:\Python33\lib\urllib\request.py", line 1268, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "c:\Python33\lib\urllib\request.py", line 1253, in do_open
r = h.getresponse()
File "c:\Python33\lib\http\client.py", line 1143, in getresponse
response.begin()
File "c:\Python33\lib\http\client.py", line 354, in begin
version, status, reason = self._read_status()
File "c:\Python33\lib\http\client.py", line 324, in _read_status
raise BadStatusLine(line)
http.client.BadStatusLine: ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Python33\lib\threading.py", line 637, in _bootstrap_inner
self.run()
File "c:\Python33\lib\threading.py", line 594, in run
self._target(*self._args, **self._kwargs)
File "c:\Users\david\workspace\tester\sdi\testing.py", line 361, in runFromStart
self._run()
File "c:\Users\david\workspace\tester\sdi\testing.py", line 369, in _run
self.units[0]() # run self.test_openBrowser()
File "c:\Users\david\workspace\tester\sdi\testing.py", line 418, in test_openBrowser
result = self.webtester.urlopen(self.url)
File "c:\Users\david\workspace\tester\sdi\testing.py", line 168, in urlopen
log.warn("Webdriver crashed or was forced to quit!", file=sys.stderr)
File "c:\Python33\lib\logging\__init__.py", line 1778, in warn
warning(msg, *args, **kwargs)
File "c:\Python33\lib\logging\__init__.py", line 1773, in warning
root.warning(msg, *args, **kwargs)
File "c:\Python33\lib\logging\__init__.py", line 1244, in warning
self._log(WARNING, msg, args, **kwargs)
TypeError: _log() got an unexpected keyword argument 'file'
I don't quite follow why the try-except doesn't catch the exception that is thrown. I think the first exception is the relevant one, but if you need the code of the methods mentioned in the second part of the traceback, I'll add it.
Thank you for any advice!
1st traceback:
File "c:\Users\david\workspace\tester\sdi\testing.py", line 165, in urlopen
self._driver.get(url);
2nd traceback:
File "c:\Users\david\workspace\tester\sdi\testing.py", line 361, in runFromStart
self._run()
...
File "c:\Users\david\workspace\tester\sdi\testing.py", line 168, in urlopen
log.warn("Webdriver crashed or was forced to quit!", file=sys.stderr)
Your try-except does catch the exception; the problem is the logging statement inside it.
However, I don't think you can catch an exception from another thread; you would have to catch it in its own thread and, e.g., use message queues to notify the main thread.
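To illustrate the first point, a minimal standalone sketch of the corrected logging call (logging methods take a message plus optional %-style arguments, but no file= keyword, which is what raises the TypeError; the handler decides where output goes):
import logging

logging.basicConfig()  # root handler writes to stderr by default
log = logging.getLogger(__name__)

try:
    raise RuntimeError("simulated webdriver crash")
except Exception:
    log.warning("Webdriver crashed or was forced to quit!")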
I created a REST API using the Bottle framework, which receives calls from GAE. Once this REST API is invoked, it does some calculations and sends the outputs as a zip file to the Amazon S3 server, then returns the link to GAE. Everything works fine except for a timeout issue. I tried adjusting the urlfetch deadline to 60 seconds, which did not solve the problem. I'd appreciate any suggestions.
GAE side:
response = urlfetch.fetch(url=url, payload=data, method=urlfetch.POST, headers=http_headers, deadline=60)
Browser error info:
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1102, in __call__
return handler.dispatch()
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "D:\Dropbox\ubertool_src\przm5/przm5_output.py", line 22, in post
przm5_obj = przm5_rest_model.przm5(args)
File "D:\Dropbox\ubertool_src\przm5\przm5_rest_model.py", line 351, in __init__
self.convertSoil1, self.convert1to3, self.convert2to3)
File "D:\Dropbox\ubertool_src\przm5\przm5_rest_model.py", line 135, in get_jid
response = urlfetch.fetch(url=url, payload=data, method=urlfetch.POST, headers=http_headers)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\urlfetch.py", line 270, in fetch
return rpc.get_result()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 612, in get_result
return self.__get_result_hook(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\urlfetch.py", line 410, in _get_fetch_result
'Deadline exceeded while waiting for HTTP response from URL: ' + url)
DeadlineExceededError: Deadline exceeded while waiting for HTTP response from URL: http://localhost:7777/my_model
REST server:
@route('/my_model', method='POST')
@auth_basic(check)
def my_model():
    # run the model
    run_my_model()
    # zip output files
    zout = zipfile.ZipFile("test.zip", "w")
    for name in os.listdir(src1):
        zout.write(name)
    zout.close()
    # upload file to S3
    conn = S3Connection(key, secretkey)
    bucket = Bucket(conn, 'przm5')
    k = Key(bucket)
    name1 = 'PRZM5_' + name_temp + '.zip'
    k.key = name1
    # All the above steps are fine
    k.set_contents_from_filename('test.zip')
    link = 'https://s3.amazonaws.com/' + my_file_path
    return {'ff': ff}

run(host='localhost', port=7777, debug=True)
Errors from the REST server:
127.0.0.1 - - [07/Jan/2014 16:16:36] "POST /my_model HTTP/1.1" 200 1663
Traceback (most recent call last):
File "C:\Python27\Lib\wsgiref\handlers.py", line 86, in run
self.finish_response()
File "C:\Python27\Lib\wsgiref\handlers.py", line 128, in finish_response
self.write(data)
File "C:\Python27\Lib\wsgiref\handlers.py", line 212, in write
self.send_headers()
File "C:\Python27\Lib\wsgiref\handlers.py", line 270, in send_headers
self.send_preamble()
File "C:\Python27\Lib\wsgiref\handlers.py", line 194, in send_preamble
'Date: %s\r\n' % format_date_time(time.time())
File "C:\Python27\Lib\socket.py", line 324, in write
self.flush()
File "C:\Python27\Lib\socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 10053] An established connection was aborted by the software in your host machine
127.0.0.1 - - [07/Jan/2014 16:16:36] "POST /my_model HTTP/1.1" 500 59
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 50953)
Traceback (most recent call last):
File "C:\Python27\Lib\SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Python27\Lib\SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "C:\Python27\Lib\SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Python27\Lib\SocketServer.py", line 651, in __init__
self.finish()
File "C:\Python27\Lib\SocketServer.py", line 710, in finish
self.wfile.close()
File "C:\Python27\Lib\socket.py", line 279, in close
self.flush()
File "C:\Python27\Lib\socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 10053] An established connection was aborted by the software in your host machine
The deadline is a maximum value; once it is reached, the request fails. And it's failing with:
Deadline exceeded while waiting for HTTP response
So you should catch that exception and try again.
If the entire operation can't be done in under 60 seconds, then there is nothing else you can do: it's a hard limit in GAE that HTTP requests can't exceed 60 seconds.
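A minimal retry sketch along those lines (assuming the url, data, and http_headers variables from the question; the attempt count of 3 is arbitrary):
from google.appengine.api import urlfetch
from google.appengine.api.urlfetch_errors import DeadlineExceededError

def fetch_with_retry(url, data, http_headers, attempts=3):
    for attempt in range(attempts):
        try:
            return urlfetch.fetch(url=url, payload=data, method=urlfetch.POST,
                                  headers=http_headers, deadline=60)
        except DeadlineExceededError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt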
Here is the code:
conn = httplib.HTTPConnection("127.0.0.1:8000")
conn.request("POST", "/api/job/", some_params, headers)
conn.close()
There is no problem sending a single request to the server,
but if I use a loop, for example:
for i in range(n):
    conn = httplib.HTTPConnection("127.0.0.1:8000")
    conn.request("POST", "/api/job/", some_params, headers)
    conn.close()
it raises an exception, although, interestingly, the request itself is successful:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 284, in run
self.finish_response()
File "/usr/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 324, in finish_response
self.write(data)
File "/usr/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 403, in write
self.send_headers()
File "/usr/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 467, in send_headers
self.send_preamble()
File "/usr/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 385, in send_preamble
'Date: %s\r\n' % http_date()
File "/usr/lib/python2.7/socket.py", line 324, in write
self.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 60438)
Traceback (most recent call last):
File "/usr/lib/python2.7/SocketServer.py", line 284, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 310, in process_request
self.finish_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 570, in __init__
BaseHTTPRequestHandler.__init__(self, *args, **kwargs)
File "/usr/lib/python2.7/SocketServer.py", line 641, in __init__
self.finish()
File "/usr/lib/python2.7/SocketServer.py", line 694, in finish
self.wfile.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
Any suggestions?
Looks to me like your buffer is getting filled. The buffer fills with any network requests you make, and is cleared when the server acknowledges receipt of the data. There may be a better way to do this, but you could try giving the server some time to acknowledge receipt by adding a short time.sleep() inside your loop.
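A sketch of that idea, reusing the loop from the question (n, some_params, and headers as defined there; the 0.1-second pause is arbitrary). Reading the response before closing also gives the server a chance to finish writing:
import time
import httplib

for i in range(n):
    conn = httplib.HTTPConnection("127.0.0.1:8000")
    conn.request("POST", "/api/job/", some_params, headers)
    response = conn.getresponse()  # let the server finish sending its reply
    response.read()
    conn.close()
    time.sleep(0.1)  # brief pause before the next request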