How to solve an error when using Pyomo in a Flask web server?

I am trying to build a web UI for solving an optimization problem, using Flask as the web framework, Pyomo as the optimization library, and CBC as the optimization engine. The error appears when I call the solver while the web server is running.
If I run the optimization task on its own, I get no error. The problem only seems to occur when running under the Flask web server.
The error occurs when Flask reaches this line: solver = pyomo.SolverFactory('cbc', executable='CBC_PATH')
Error when running web server:
File "C:\Users\siwapolt\Envs\venv\lib\site-packages\pyomo\opt\base\solvers.py", line 582, in solve
_status = self._apply_solver()
File "C:\Users\siwapolt\Envs\venv\lib\site-packages\pyomo\opt\solver\shellcmd.py", line 244, in _apply_solver
self._rc, self._log = self._execute_command(self._command)
File "C:\Users\siwapolt\Envs\venv\lib\site-packages\pyomo\opt\solver\shellcmd.py", line 308, in _execute_command
define_signal_handlers = self._define_signal_handlers
File "C:\Users\siwapolt\Envs\venv\lib\site-packages\pyutilib\subprocess\processmngr.py", line 545, in run_command
= signal.signal(signal.SIGINT, handler)
File "c:\users\siwapolt\appdata\local\continuum\anaconda3\Lib\signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread

Yes, as long as you have PyUtilib 5.6.3, you have this fix. That said, signal handlers are still enabled by default. If you want to turn them off, you need to:
import pyutilib.subprocess.GlobalData
pyutilib.subprocess.GlobalData.DEFINE_SIGNAL_HANDLERS_DEFAULT = False
References: https://github.com/PyUtilib/pyutilib/issues/31#issuecomment-382479024
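For context, here is a minimal sketch of how those two lines fit into a Flask app. The route name, the toy model, and the 'CBC_PATH' placeholder are all illustrative assumptions, not part of the original question:

import pyutilib.subprocess.GlobalData
# Disable signal handlers before any solve() call, since Flask handles
# requests off the main thread and signal.signal() only works there.
pyutilib.subprocess.GlobalData.DEFINE_SIGNAL_HANDLERS_DEFAULT = False

from flask import Flask
import pyomo.environ as pyomo

app = Flask(__name__)

@app.route('/solve')
def solve():
    # Toy model: maximize x subject to 0 <= x <= 10
    model = pyomo.ConcreteModel()
    model.x = pyomo.Var(bounds=(0, 10))
    model.obj = pyomo.Objective(expr=model.x, sense=pyomo.maximize)
    solver = pyomo.SolverFactory('cbc', executable='CBC_PATH')  # placeholder path
    solver.solve(model)
    return str(pyomo.value(model.x))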

Related

python motor asyncio can't start new thread

I use asyncio to write to Mongo, using the motor library.
When I have a few bulk_writes, it works with no problem.
However, when I have many writes at the same time, I get the exception RuntimeError: can't start new thread.
File "/usr/local/lib/python3.7/site-packages/motor/metaprogramming.py", line 77, in method **unwrapped_kwargs)
File "/usr/local/lib/python3.7/site-packages/motor/frameworks/asyncio/__init__.py", line 74 in run_on_executor
_EXECUTOR, functools.partial(fn, *args, **kwargs))
File "uvloop/loop.pyx", line 2702, in uvloop.loop.Loop.run_in_exector
File "/usr/local/lib/python3.7/concurrent/features/thread.py", line 160, in submit
self._adjust_thread_count()
File "/usr/local/lib/python3.7/concurrent/features/thread.py", line 181, in _adjust_thread_count
t.start()
File "usr/local/lib/python3.7/threading.py", line 847, in start
_start_new_thread(self._bootsrap, ())
RuntimeError: can't start new thread
I tried changing maxPoolSize, but it didn't help.
Important facts:
On my local computer it works with no errors; in OpenShift, however, I have this problem.
In OpenShift I run my code via gunicorn: gunicorn app:app --worker-class uvicorn.workers.UvicornWorker
In OpenShift, when I have only one worker, it works. But with 2+ workers I have this problem.
I don't open many AsyncIOMotorClient connections; I have only two at a time.
With pymongo and almost the same code I have no error, but pymongo has no asyncio support.
Without the Mongo part, my code works with no problems.
Solved.
There is a limit of 1024 threads per OpenShift pod.
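As a mitigation sketch (not from the original answer): Motor sizes its internal thread pool from the MOTOR_MAX_WORKERS environment variable, so capping it per gunicorn worker can keep the pod under that thread limit. The specific numbers below are assumptions:

import os

# Assumption: MOTOR_MAX_WORKERS must be set before motor is imported,
# because the executor's thread pool is created at import time.
os.environ["MOTOR_MAX_WORKERS"] = "10"

from motor.motor_asyncio import AsyncIOMotorClient

# One shared client per process; maxPoolSize bounds MongoDB connections,
# while MOTOR_MAX_WORKERS bounds the threads that service them.
client = AsyncIOMotorClient("mongodb://localhost:27017", maxPoolSize=10)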

How to deploy a flow remotely on a Prefect server?

I started working with the Prefect orchestration tool.
My goal is to set up a server that manages my automations on several other PCs and servers.
I do not fully understand Prefect's architecture yet (with all these agents etc.), but I managed to start a server in a remote Ubuntu environment.
To access the UI remotely, I created a config.toml and added the following lines:
[server]
endpoint = "<IPofserver>:4200/graphql"
[server.ui]
apollo_url = "http://<IPofserver>:4200/graphql"
[telemetry]
[server.telemetry]
enabled = false
The telemetry part is just to disable sending analytics data to Prefect.
Afterwards it was possible to access the UI from another PC and also to start an agent on another PC with:
prefect agent local start --api "http://<IPofserver>:4200/graphql"
But how can I deploy flows now? I cannot find an option to set their API endpoint like there is for the agent.
Even if I try to register a flow on the machine where the server itself is running, I get the following error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/prefect/core/flow.py", line 1726, in register
registered_flow = client.register(
File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 831, in register
project = self.graphql(query_project).data.project  # type: ignore
File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 443, in graphql
result = self.post(
File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 398, in post
response = self._request(
File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 633, in _request
response = self._send_request(
File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 497, in _send_request
response = session.post(
File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 635, in post
return self.request("POST", url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 695, in send
adapter = self.get_adapter(url=request.url)
File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 792, in get_adapter
raise InvalidSchema(f"No connection adapters were found for {url!r}")
requests.exceptions.InvalidSchema: No connection adapters were found for ':4200/graphql'
Example code used:
import prefect
from prefect import task, Flow

@task
def say_hello():
    logger = prefect.context.get("logger")
    logger.info("Hello, Cloud!")

with Flow("hello-flow") as flow:
    say_hello()

# Register the flow under the "Test" project
flow.register(project_name="Test")
If you are getting started with Prefect, I'd recommend using Prefect 2.0: check the getting-started documentation page and the page describing the underlying architecture.
If you still need help with Prefect Server and Prefect 1.0, check the extensive troubleshooting guide, and if that doesn't help, send us a message on Slack and we'll try to help you there.
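Incidentally, the InvalidSchema error ("No connection adapters were found for ':4200/graphql'") is what requests raises when a URL has no scheme, which suggests the [server] endpoint in config.toml is missing its http:// prefix. A sketch of the corrected entry, assuming it should mirror the apollo_url line:

[server]
endpoint = "http://<IPofserver>:4200/graphql"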

Python-SocketIO on Windows: An operation was attempted on something that is not a socket

I'm in the process of writing a minimal WebSocket server with Python 3. I am using Flask, python-socketio, and eventlet, per the instructions in the latest docs. The problem is that when the webpage holding the socket connection is reloaded, the server throws the following exception:
Traceback (most recent call last):
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\site-packages\eventlet\greenpool.py", line 88, in _spawn_n_impl
func(*args, **kwargs)
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\site-packages\eventlet\wsgi.py", line 734, in process_request
proto.__init__(sock, address, self)
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\socketserver.py", line 686, in __init__
self.finish()
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\site-packages\eventlet\wsgi.py", line 651, in finish
greenio.shutdown_safe(self.connection)
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\site-packages\eventlet\greenio\base.py", line 479, in shutdown_safe
return sock.shutdown(socket.SHUT_RDWR)
OSError: [WinError 10038] An operation was attempted on something that is not a socket
I took a look at the source, and it seems like shutdown_safe is supposed to catch any exceptions raised while shutting down a connection. In short, it looks like the author of this part of the library didn't foresee Windows raising an OSError on shutdown.
Although this is a benign issue, I was wondering whether there are any existing fixes or tweaks, and if not, whether I should submit this to the python-socketio GitHub issue tracker.
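No fix is given here, but as a hypothetical local workaround (an assumption, not an official patch), one could wrap eventlet's shutdown_safe so the Windows OSError is swallowed the same way other shutdown errors are:

import eventlet.greenio as greenio

_orig_shutdown_safe = greenio.shutdown_safe

def _shutdown_safe_windows(sock):
    try:
        return _orig_shutdown_safe(sock)
    except OSError:
        # e.g. WinError 10038: the socket is already gone, which is
        # harmless during connection teardown
        pass

# eventlet.wsgi looks up greenio.shutdown_safe at call time,
# so replacing the module attribute is enough
greenio.shutdown_safe = _shutdown_safe_windows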

ImageKit async error - can't decode message body

I'm using Django 1.6 and Django-ImageKit 3.2.1.
I'm trying to generate images asynchronously with ImageKit. Async image generation works locally but not on the production server.
I'm using Celery and I've tried both:
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Async'
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Celery'
Using the Simple backend (synchronous) instead of Async or Celery works fine on the production server. So I don't understand why the asynchronous backend gives me the following ImportError (pulled from the Celery log):
[2014-04-05 21:51:26,325: CRITICAL/MainProcess] Can't decode message body: DecodeError(ImportError('No module named s3utils',),) [type:u'application/x-python-serialize' encoding:u'binary' headers:{}]
body: '\x80\x02}q\x01(U\x07expiresq\x02NU\x03utcq\x03\x88U\x04argsq\x04cimagekit.cachefiles.backends\nCelery\nq\x05)\x81q\x06}bcimagekit.cachefiles\nImageCacheFile\nq\x07)\x81q\x08}q\t(U\x11cachefile_backendq\nh\x06U\x12ca$
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/messaging.py", line 585, in _receive_callback
decoded = None if on_m else message.decode()
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/message.py", line 142, in decode
self.content_encoding, accept=self.accept)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
self.gen.throw(type, value, traceback)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 59, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 55, in _reraise_errors
yield
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 64, in pickle_loads
return load(BytesIO(s))
DecodeError: No module named s3utils
s3utils is what defines my AWS S3 bucket paths. I'll post it if need be, but the strange thing is that the synchronous backend has no problem importing s3utils, while the asynchronous one can't... and it fails ONLY on the production server, not locally.
I'd be SO grateful for any help debugging this. I've been wrestling with this for days. I'm still learning Django and Python, so I'm hoping this is a stupid mistake on my part. My Google-fu has failed me.
As I hinted at in my comment above, this kind of thing is usually caused by forgetting to restart the worker.
It's a common gotcha with Celery. The workers run in a separate process from your web server, so they have their own copy of your code loaded. Just like with your web server, if you change your code, you need to reload the worker so it sees the change. The web server talks to your worker not by directly running code, but by passing serialized messages via the broker that say something like "call the function do_something()". The worker then reads that message and (here's the tricky part) calls its own version of do_something(). So even if you restart your web server (so that it has the new version of your code), if you forget to reload the worker, which is what actually calls the function, the old version of the function will still be called. In other words, you need to restart the worker any time you make a change to your tasks.
You might want to check out the autoreload option for development. It could save you some headaches.
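Concretely, that means something like the following on the production server. The project name and the process manager are assumptions, and --autoreload was an experimental Celery 3.x flag (since removed in Celery 4):

# restart the worker process so it loads the new task code, e.g. under supervisor:
sudo supervisorctl restart celery
# or, during development, run the worker with autoreload (Celery 3.x only):
celery worker -A myproject --autoreload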

Python Paste using Bottle framework Broken Pipe Error

I am using the Bottle framework to implement WSGI requests and responses. Because of the single-thread issue, I changed the server to Paste's WSGI server and tested with Apache Bench, but the results contain broken-pipe errors, similar to this question: How to prevent errno 32 broken pipe?
I have tried the answer there, but to no avail.
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/paste/httpserver.py", line 1068, in process_request_in_thread
self.finish_request(request, client_address)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 641, in __init__
self.finish()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 694, in finish
self.wfile.flush()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
The server code is shown below. I have no idea how to improve the connection handling; should I use a thread pool?
import pymongo
from bottle import route, default_app
from paste import httpserver

@route('/')
def index():
    # connectionString is defined elsewhere in the app
    connection = pymongo.MongoClient(connectionString)
    db = connection.test
    collection = db.test
    return str(collection.find_one())

application = default_app()
httpserver.serve(application, host='127.0.0.1', port=8082)
The problem is that Paste's WSGIServer is a synchronous server, and it is not suited to many concurrent users sending requests at the same time. To get around this limitation, there are many third-party servers and frameworks you can use; popular choices are gevent's greenlet-based server, Tornado, and CherryPy. All of them are event-driven and asynchronous, which lets them handle many concurrent users.
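For example, a minimal sketch of serving the same Bottle app with gevent instead of Paste; the host, port, and monkey-patching choice are assumptions:

from gevent import monkey
monkey.patch_all()  # make blocking sockets (including pymongo's) cooperative

from gevent.pywsgi import WSGIServer
from bottle import default_app

# assumes the @route handlers have already been defined/imported above
application = default_app()
# each request runs in a lightweight greenlet instead of an OS thread
WSGIServer(('127.0.0.1', 8082), application).serve_forever()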
