mod_wsgi: Global statements execute only when the file is modified - python

I'm using Python over mod_wsgi and I have some statements (debug messages and other things) in the global part of the script (outside the application function).
Those global statements are executed only once, just after the .py file is modified (touched). If I reload the webpage again, those statements are not executed until the next time I edit/touch the .py file.
I guess the reason is a caching mechanism at some level (python level? wsgi level?).
Is there something I can configure so that the statements in the global part of the script are always executed?
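For concreteness, a minimal sketch of the kind of script being described (the messages and the handler body are only illustrative):

import sys

# Module-level ("global") statements: these run only when the script is
# (re)imported, not on every request.
sys.stderr.write("debug: module-level code executed\n")

def application(environ, start_response):
    # Code in here runs on every request.
    sys.stderr.write("debug: handling a request\n")
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, world!']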

Read the mod_wsgi documentation on source code reloading.
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
In short, use daemon mode, not embedded mode, and touch the WSGI script file after any changes to any code; that will force a reload of the daemon process.
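For reference, daemon mode is enabled in the Apache configuration rather than in the script itself; a minimal sketch, where the process group name and paths are only illustrative:

# Run the WSGI application in its own daemon process group
WSGIDaemonProcess myapp processes=1 threads=5
WSGIProcessGroup myapp
WSGIScriptAlias / /var/www/myapp/app.wsgi

With something like this in place, touching /var/www/myapp/app.wsgi restarts the daemon process, which re-imports the script and re-runs the module-level statements.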

I found a solution:
MaxRequestsPerChild 1
Setting Apache to process only one request per child before killing it forces the source code to be reloaded every time.
I don't know if this is the best way, but at least it works for now.

Related

Debugging any Known Python File Run by an Interpreter

I'm trying to debug any Python script that an interpreter runs, as long as I have a reference to that script. I.e., if my connected interpreter runs a script called abc.py, and in my script directory I have abc.py with breakpoints set, the IDE will automatically stop execution at those breakpoints.
I'm using PyCharm, but I'd like to know the theory here, so that if I ever wanted to connect VS Code I'd be able to do that as well. Additionally, I'm currently connecting to a Docker container running Airflow.
Given the above, I'm assuming that the goal is to do a "remote" debug.
Also, since Python is interpreted, I'm assuming that if I can hook into the interpreter and PyCharm can match the file being run by the interpreter, then it should be able to pause the execution.
I am additionally assuming that the interpreter runs in "normal" mode, not in a debug mode as we have in Java.
I have read three approaches:
ssh interpreter to my Docker container - seems most promising for my current goal, but unsure if it'll work
using a Python debug server (Debugging Airflow Tasks with IDE tools?) - still requires manual changes in the specific scripts (a sketch of those changes follows this list)
using a Docker interpreter (https://medium.com/@andrewhharmon/apache-airflow-using-pycharm-and-docker-for-remote-debugging-b2d1edf83d9d) - still requires individual debug configs for executing a single DAG / script
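For reference, the "manual changes" for the debug-server approach usually amount to a couple of lines like these added to the script you want to stop in (the package is pydevd-pycharm; host and port are assumptions that must match the PyCharm debug-server run configuration):

import pydevd_pycharm

# Connect back to the debug server that PyCharm is listening on.
# 'host.docker.internal' is only an assumption for a container reaching
# an IDE on the host machine; adjust host and port to your setup.
pydevd_pycharm.settrace('host.docker.internal', port=5678,
                        stdoutToServer=True, stderrToServer=True)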
Is debugging any file executed by a Python interpreter possible, at least in theory?
Is it possible remotely?
Is it possible using airflow at all?

How to debug hanging python code when I can't use PDB

I have some code that I'm able to run perfectly fine on my local system. Only when I launch it to run automatically as a Docker-containerized program on a server does it hang and time out at a particular line. I'm not able to step through the program with pdb in that particular environment, nor can I reproduce the behavior in an environment where I could use a debugger. I can just see in my CloudWatch logs that it's stopping on that line and timing out.
That line of code that is hanging is calling other code, but I don't know internally the exact point where it hangs. Is there any way to wrap something around this line of code that will make it print out every step that it goes through?
(For reference, the actual server is an AWS SageMaker endpoint which is not directly accessible via ssh or anything, and the line of code that is timing out is an AutoGluon model.predict() call.)
I guess what I'm looking for could potentially be something like a debug decorator, or a way to automate pdb and wrap it around a function call to see where it's hanging. Or any better suggestions.
Perhaps you could do a SageMaker local session combined with remote-pdb: https://pypi.org/project/remote-pdb/. You need the local session so you can easily get inside the container, and remote-pdb then lets you connect.
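If that route works, the code change itself would be something along these lines (host, port and the variable names are illustrative; the predict call is the one from the question):

from remote_pdb import RemotePdb

# Pause here and wait for a telnet/netcat connection on the given port,
# e.g. `telnet 127.0.0.1 4444`, then step into the hanging call.
RemotePdb('127.0.0.1', 4444).set_trace()

predictions = model.predict(data)  # the call that appears to hang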

pycharm and flask autoreload and breakpoints not working

I'm using PyCharm 4, with Flask 0.10.1 and Python 3.4.
It seems that when running a flask application from inside pycharm, if I run it with:
app.run(debug=True)
My breakpoints are ignored. After some googling, I've found that in order to make PyCharm stop on breakpoints, I should run flask with:
app.run(debug=True, use_reloader=False)
Now PyCharm correctly stops on breakpoints, but I miss the autoreloading feature.
Is there any way to make both work together?
Using Python 2.7, both things work.
I reported this to PyCharm: https://youtrack.jetbrains.com/issue/PY-13976
I'm going to start with the short answer: No, what you want cannot be done with any releases of PyCharm up to 4.0.1.
The problem is that when you use the reloader the Flask application runs in a child process, so the PyCharm debugger is attached to the master process and has no control over the child.
The best way to solve this problem, in my opinion, is to ask Jetbrains to build a "restart on change" feature in their IDE. Then you don't need to use Werkzeug's reloader at all and you get the same functionality direct from PyCharm.
Until Jetbrains decides to implement this, I can share my workaround, which is not terribly bad.
In the "Edit Configurations", set the configuration you are going to use to "Single Instance only" (check box in the top right of the dialog box)
Make sure the configuration is the active one.
Configure your Flask app to not use the Werkzeug reloader.
Press Ctrl-D to start debugging (on Mac, others may have a different shortcut)
Breakpoints should work just fine because the reloader isn't active.
Make any code changes you need.
When you are ready to restart, hit Ctrl-D again. The first time you do it you will get a confirmation prompt, something like "stop and restart?". Say yes, and check the "do not show again" checkbox.
Now you can hit Ctrl-D to quickly restart the debugger whenever you need to.
I agree it is not perfect, but once the Ctrl-D gets into your muscle memory you will not even think about it.
Good luck!
I found that in PyCharm 2018.1.2 there is a FLASK_DEBUG checkbox in the run configuration.
With this enabled, saving a file after making some changes triggers the reload.
In my setup, I'm debugging the Flask app by running a main.py file which sets some configuration and calls app.run(). My Python interpreter is set up in a Docker container.
My issue was that I needed to check "Run with Python console".
The problem is that with use_reloader=True the Werkzeug application is started in a separate (child) thread of the main application, and PyCharm fails to correctly handle breakpoints because they are lost when the thread starts.
You can try to follow this thread: http://forum.jetbrains.com/thread/PyCharm-776 but it seems there was not too much progress on that.
I'd suggest using something Python-ish like pdb, i.e.:
@app.route('/<string:page>')
def main(page):
    import pdb; pdb.set_trace()  # This line actually stops application execution
                                 # and starts a Python debug shell in the console,
                                 # where you can examine the current scope and continue
                                 # normal code execution at any time.
                                 # You can inject *any* code here.
                                 # For example, if you type `print(page)` during the pause,
                                 # it will output the content of the "page" variable.
    return render_template('index.html')
Try configuring the Python run configuration in "Edit Configurations". After that, run in debug mode.
You need to unlock the console:
start the app in debug mode,
then do something that causes an error;
at the end of the error message from Flask there is a console prompt,
where you enter the PIN that Flask prints in the console at startup.
Copy and paste this PIN into the console and click "Confirm PIN";
now the breakpoints will work.
From PyCharm 2017, using Python 2.7 (in my case with a virtual env, but I suppose that is not necessary), I do:
Run...
leave Script and Script parameters blank
put -m flask run in Interpreter options
set the FLASK_APP environment variable
then run "Attach to Local Process" and finally choose the running process.
My use case is to connect from Postman to the Flask REST service endpoints and stop on my breakpoints.

jython killing parent process that spawns subprocess breaks subprocess stdout to file?

Let me start with what I'm really trying to do. We want a platform-independent startup script for invoking a JVM with some system properties and a dynamically generated classpath. We picked Jython in particular because we only need to depend on the standalone jython.jar in our startup script. We decided we could write a Jython script that uses subprocess.Popen to launch our application's JVM and then terminate.
One more thing. Our application uses a lot of legacy debug code that prints to standard out, so the startup script typically has been redirecting stdout/stderr to a log file. I attempted to reproduce that with our Jython script like this:
subprocess.Popen(args, stdout=logFile, stderr=logFile)
After this line, the launcher script and the JVM hosting Jython terminate. The problem is that nothing shows up in the logFile. If I instead do this:
subprocess.Popen(args, stdout=logFile, stderr=logFile).wait()
then we get logs. So the parent process needs to run in parallel with the application process launched via subprocess? I want to avoid having two running JVMs.
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates? Is there a better way to launch the application jvm from jython? Is Jython a bad solution anyway?
We want a platform independent startup script for invoking a JVM with some system properties and a dynamically generated classpath.
You could use a platform-independent script to generate a platform-specific startup script, either at installation time or before each invocation. In the latter case you additionally need a simple, static, platform-specific script that invokes your platform-independent startup-script-generating script and then the generated script itself. In both cases you start your application by calling a static, platform-specific script.
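A rough sketch of the generate-then-invoke idea (the helper, file names and JVM options are made up for illustration):

import os

def write_launcher(classpath_entries, main_class, jvm_props):
    # Build a platform-specific command line for starting the JVM.
    sep = ';' if os.name == 'nt' else ':'
    props = ' '.join('-D%s=%s' % item for item in jvm_props.items())
    cmd = 'java %s -cp %s %s' % (props, sep.join(classpath_entries), main_class)
    if os.name == 'nt':
        path = 'start_app.bat'
        body = '@echo off\r\n%s %%*\r\n' % cmd
    else:
        path = 'start_app.sh'  # remember to mark the generated script executable
        body = '#!/bin/sh\n%s "$@"\n' % cmd
    with open(path, 'w') as f:
        f.write(body)
    return path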
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates?
You could open the file/redirect in a child process, e.g., using the shell:
Popen(' '.join(args + ['>', 'logFile', '2>&1']),  # shell-specific cmdline
      shell=True)  # on Windows see _cmdline2list to understand what is going on

running a python script on a remote computer

I have a Python script and am wondering: is there any way that I can ensure that the script runs continuously on a remote computer? For example, if the script crashes for whatever reason, is there a way to start it up automatically instead of having to remote desktop in? Are there any other factors I have to be aware of? The script will be running on a Windows machine.
Many ways - in the case of Windows, even a simple looping batch file would probably do: just have it start the script in a loop (whenever it crashes, it would return to the shell and be restarted).
Maybe you can use XML-RPC to call functions and pass data. Some time ago I did something like what you ask by using SimpleXMLRPCServer and xmlrpc.client. There are examples of simple configurations in the docs.
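A minimal sketch of that idea (the port and function name are illustrative), with the server running next to your script on the remote machine:

from xmlrpc.server import SimpleXMLRPCServer

def ping():
    # Lets a remote caller check that the script is still alive.
    return 'alive'

server = SimpleXMLRPCServer(('0.0.0.0', 8000), allow_none=True)
server.register_function(ping)
server.serve_forever()

From your own machine you would then call it with xmlrpc.client.ServerProxy('http://remote-host:8000/').ping().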
Depends on what you mean by "crash". If it's just exceptions and the like, you can catch everything and restart your process from within itself. If it's more than that, one possibility is to run it as a daemon spawned from a separate Python process that acts as a supervisor. I'd recommend supervisord, but that's UNIX only. You can clone a subset of its functionality, though.
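On the "separate supervisor process" side, even a small wrapper covers the basic restart-on-crash case (worker.py and the delay are illustrative):

import subprocess
import sys
import time

while True:
    # Run the worker script and wait for it to exit.
    code = subprocess.call([sys.executable, 'worker.py'])
    if code == 0:
        break  # clean exit: stop supervising
    print('worker exited with code %d, restarting...' % code)
    time.sleep(5)  # brief pause so a crash loop does not spin the CPU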
