I would like to use pdb to debug a view in Django, but so far I've been unsuccessful, getting a BdbQuit error.
The view I've tried this on is a simple GET request:
def get_file_names(request):
    import pdb; pdb.set_trace()
    my_files = Files.objects.filter(user_id=request.user)
    name_list = list(map(lambda x: (x.id, x.name, x.description),
                         my_files))
    return JsonResponse({'rows': name_list})
A couple notes:
I prefer not to use django-pdb, since it forces me to modify the client's request parameters.
I also do not want to call my code from pdb (since this code is being
called from the client).
Django Version 1.10.6
The app is running inside a docker container
Does anyone have a solution that works? I'm finding that debugging complex web requests in Python can be very tedious, and it would be really helpful if pdb worked.
Note this is not a subprocess, just a simple GET request (eventually I would like it to work on a more complex request, but I've posted a simple example since this already fails).
Any suggestions? The suggestions here don't seem to work.
In order to run pdb inside a Django app running inside a container, you must run the container with the -it flags:
docker run -it .... djangoimage
If you're running detached (-d), you can attach to your container with docker attach $IDCONTAINER.
If you're running with docker-compose:
services:
  django:
    # ...
    stdin_open: true
    tty: true
And then use docker attach to attach to the Django container when you run pdb.
https://docs.docker.com/engine/reference/commandline/attach/
https://docs.docker.com/engine/reference/run/
https://docs.docker.com/compose/compose-file/#domainname-hostname-ipc-mac_address-privileged-read_only-shm_size-stdin_open-tty-user-working_dir
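The BdbQuit from the question is exactly the symptom of a missing interactive terminal: pdb reads commands from stdin, and in a container started without -it (or without stdin_open/tty in compose), stdin hits EOF immediately and Bdb aborts. A minimal stdlib sketch for checking, from inside the process, whether pdb would have a usable terminal:

```python
import sys

def pdb_has_a_terminal():
    """pdb reads its commands from stdin; in a container started
    without -it (or without stdin_open/tty in docker-compose),
    stdin is not a TTY, set_trace() hits EOF at the first prompt,
    and Bdb raises BdbQuit."""
    return sys.stdin.isatty() and sys.stdout.isatty()

print(pdb_has_a_terminal())
```

Printing this at startup is a quick way to confirm whether the container was launched with the flags above before sprinkling set_trace() calls around.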
Related
I am successfully running a Django app hosted inside a Docker container. I changed something in my code on purpose in order to make it break. What I need is some way to see the log of the running code, as if I were running it locally on my computer. For example, I forgot to import a library, and when I run the code locally I get a message in the terminal like "ModuleNotFoundError: No module named 'somemodule'". But when I run the same code from inside the container I get no log; the container just fails to start.
My question is: how can I get a log for my script from inside the container, so I can debug my code?
So, what I wanted to do was to somehow run/debug my own Python code inside a container in order to see its log.
I managed to do it using VS Code with the Remote - SSH and Remote - Containers extensions.
Remote - SSH
Remote - Containers
If the containers are hosted locally on your PC, you don't need the Remote - SSH extension.
I have a pretty simple setup. I'm running PyCharm 2018.2.3 and using Docker Compose to spin up three containers:
My Django application
NGINX to serve static
Postgres DB
I've configured the remote interpreter for debugging the container, and breakpoints work just fine in most cases, at least when I hit my API endpoints or some other action in the Django application.
What does not work is when I run one of my custom manage.py commands. I've tried this two ways so far.
I set up another debug configuration in PyCharm to execute the command. This results in another container spinning up (in place of the original), running the command without stopping at any breakpoints, and then the whole container shuts down.
I've logged into the container and run the manage.py command directly via the command line; it executes in the container, but again no breakpoints are hit.
The documentation covers the normal case, but I can't find any help for debugging these commands in the container.
Thanks for any help or tips.
In order to debug Django commands in a Docker container you can create a new Run/Debug Configuration with the following setup:
Use a Python configuration template
Script path: the absolute location of manage.py
Parameters: the Django command you want to debug/execute
Important! Python interpreter: the Docker Compose remote interpreter
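For example, assuming the project is mounted at /opt/project and the custom command is called import_data (both names are made up for illustration), the filled-in configuration would look like:

```
Script path:        /opt/project/manage.py
Parameters:         import_data
Python interpreter: Remote Python (Docker Compose service: django)
```

With the Docker Compose interpreter selected, PyCharm runs manage.py inside the service container with its debugger attached, which is why breakpoints in the command's code are hit.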
Just an update in case anybody comes across a similar problem. My personal solution was not to use the manage.py commands, but instead to make those same commands available via an HTTP call.
I found that it was easier (and often even more useful) to simply have an endpoint like myserver.com/api/do-admin-function and restrict that to administrative access.
When I put a breakpoint in my code, even running in the container, it breaks just fine as expected and allows me to debug the way I'd like
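One way to sketch that design in plain Python (all names here are hypothetical, not from the original post): keep each admin task as an ordinary function in a registry, so the same code can back an HTTP endpoint (where the in-process debugger works) and, if desired, a manage.py command:

```python
# Hypothetical registry: admin tasks are plain functions, so the same
# code can be reached from an HTTP view (debuggable in-process with
# breakpoints) as well as from a manage.py command wrapper.
ADMIN_COMMANDS = {}

def admin_command(name):
    """Decorator that registers a function under a command name."""
    def register(fn):
        ADMIN_COMMANDS[name] = fn
        return fn
    return register

@admin_command("rebuild-index")
def rebuild_index():
    # ... real admin work would go here ...
    return "index rebuilt"

def run_admin_command(name):
    """Called from the view (e.g. /api/do-admin-function) after an
    admin-permission check; a breakpoint anywhere below here is hit
    inside the normal request/debug session."""
    return ADMIN_COMMANDS[name]()

print(run_admin_command("rebuild-index"))  # → index rebuilt
```

The view itself then only has to map the request to a registered name and enforce the administrative-access restriction mentioned above.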
It can depend on the exact content of your docker-compose.yml.
See for instance the section "An interactive debugger inside a running container!" of the article "A Simple Recipe for Django Development In Docker (Bonus: Testing with Selenium)" from Adam King.
His docker-compose.yml includes:
version: "2"
services:
  django:
    container_name: django_server
    build:
      context: .
      dockerfile: Dockerfile
    image: docker_tutorial_django
    stdin_open: true
    tty: true
    volumes:
      - .:/var/www/myproject
    ports:
      - "8000:8000"
In it, see:
stdin_open: true
tty: true
[Those 2 lines] are important, because they let us run an interactive terminal.
Hit ctrl-c to kill the server running in your terminal, and then bring it up in the background with docker-compose up -d.
docker ps tells us it's still running.
We need to attach to that running container, in order to see its server output and pdb breakpoints.
The command docker attach django_server will present you with a blank line, but if you refresh your web browser, you’ll see the server output.
Drop import pdb; pdb.set_trace() in your code and you’ll get the interactive debugger, just like you’re used to.
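As a side note, on Python 3.7+ the builtin breakpoint() does the same as import pdb; pdb.set_trace(), with the advantage that it can be disabled container-wide through the PYTHONBREAKPOINT environment variable, so a forgotten breakpoint cannot hang a production container. A small sketch:

```python
import os

# PYTHONBREAKPOINT=0 turns every breakpoint() call into a no-op;
# per PEP 553, sys.breakpointhook re-reads the variable on each call,
# so setting it in the container's environment is enough.
os.environ["PYTHONBREAKPOINT"] = "0"

breakpoint()  # skipped because of the env var above
print("continued past the disabled breakpoint")
```

With the variable unset (the default), breakpoint() drops into pdb exactly like set_trace(), so the attach workflow above is unchanged.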
So I am using the microservices Python package nameko, which runs a service using eventlet and calls eventlet.monkey_patch() on import.
I have determined that it is this call that blocks any debug attempts via ipdb. The ipdb console shows in the terminal, but I cannot type anything, and I have to close the entire terminal session in order to quit the process.
The stuck console shows the ipdb banner and prompt but accepts no input.
How can I use ipdb with this function?
EDIT: This issue only seems to happen when within a docker container.
Sorry, no convenient solution; for now your best option is to skip Docker when using ipdb (you can extract the filesystem image from Docker and run it in another virtualisation layer, such as QEMU, VirtualBox, or systemd-nspawn). See https://github.com/larsks/undocker for help.
Other things to try (may not work, please share results):
update eventlet to github master
pip install https://github.com/eventlet/eventlet/archive/master.zip
This issue is cross posted here https://github.com/eventlet/eventlet/issues/361
I prefer pudb for Python debugging. I am building Python applications that run inside a Docker container.
Does anyone know how to make pudb available inside a Docker container?
Thank you
You need to have pudb installed in the Docker container (this can be done by adding RUN pip install pudb to the Dockerfile).
You need to have the port where you will connect to pudb open. E.g.:
For a Dockerfile: add EXPOSE 6900.
For docker-compose the syntax is different:
ports:
  - "6900:6900"
You need to add a call to set_trace where you want the entry point to be in the Python code. E.g.
from pudb.remote import set_trace; set_trace(term_size=(160, 40), host='0.0.0.0', port=6900)
When the code is running and reaches that point, you can connect to it with a telnet client and use pudb as you normally would to debug. In the example above, from another terminal type telnet 127.0.0.1 6900.
You can find a repository with a full working example here: https://github.com/isaacbernat/docker-pudb
To debug a bug I'm seeing on Heroku but not on my local machine, I'm trying to do step-through debugging.
The typical import pdb; pdb.set_trace() approach doesn't work with Heroku since you don't have access to a console connected to your app, but apparently you can use rpdb, a "remote" version of pdb.
So I've installed rpdb, added import rpdb; rpdb.set_trace() at the appropriate spot. When I make a request that hits the rpdb line, the app hangs as expected and I see the following in my heroku log:
pdb is running on 3d0c9fdd-c18a-4cc2-8466-da6671a72cbc:4444
OK, so how to connect to the pdb that is running? I've tried heroku run nc 3d0c9fdd-c18a-4cc2-8466-da6671a72cbc 4444 to try to connect to the named host from within Heroku's system, but that just immediately exits with status 1 and no error message.
So my specific question is: how do I now connect to this remote pdb?
The general related question is: is this even the right way for this sort of interactive debugging of an app running on Heroku? Is there a better way?
NOTE RE CELERY: I've now also tried a similar approach with Celery, to no avail. The default host Celery's rdb (remote pdb wrapper) uses is localhost, which you can't reach when it's running on Heroku. I've tried setting the CELERY_RDB_HOST environment variable to the domain of the website hosted on Heroku, but that gives a "Cannot assign requested address" error. So it's the same basic issue: how to connect to the remote pdb instance that's running on Heroku?
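For context on what there is to connect to: rpdb (and Celery's rdb) essentially wrap pdb around a TCP socket, roughly like this stdlib-only sketch (the port number is arbitrary):

```python
import pdb
import socket
import sys

def remote_set_trace(host="0.0.0.0", port=4444):
    """Rough stdlib sketch of what rpdb does: serve a pdb session over
    a TCP socket, so a process with no attached console can still be
    debugged by connecting with nc/telnet."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((host, port))
    listener.listen(1)
    conn, _ = listener.accept()          # blocks until a debugger client connects
    handle = conn.makefile("rw")
    debugger = pdb.Pdb(stdin=handle, stdout=handle)
    debugger.set_trace(sys._getframe().f_back)  # break in the caller's frame
```

So connecting is purely a matter of reaching that host:port, which is the crux on Heroku: dyno ports other than the routed $PORT are not reachable from outside, which is consistent with the nc and CELERY_RDB_HOST failures described above.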
In answer to your second question, I do it differently depending on the type of error (browser-side, backend, or view). For backend and view testing (unittests), will something like this work for you?
$ heroku run --app=your-app "python manage.py shell --settings=settings.production"
Then debug-away within ipython:
>>> %run -d script_to_run_unittests.py
Even if you aren't running a django app you could just run the debugger as a command line option to ipython so that any python errors will drop you to the debugger:
$ heroku run --app=your-app "ipython --pdb"
Front-end testing is a whole different ballgame where you should look into tools like selenium. I think there's also a "salad" test suite module that makes front end tests easier to write. Writing a test that breaks is the first step in debugging (or so I'm told ;).
If the bug looks simple, you can always do the old "print and run" with something like
import logging
logger = logging.getLogger(__name__)
logger.warning('here be bugs')
and review your log files with getsentry.com or an equivalent monitoring tool or just:
heroku logs --tail
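The "print and run" snippet works because Heroku captures anything the process writes to stdout/stderr into the stream shown by heroku logs; configuring the root logger once makes every module's logger visible there. A minimal sketch:

```python
import logging
import sys

# Heroku (like `docker logs`) captures stdout/stderr, so streaming
# log records to stderr is all the configuration a dyno needs.
logging.basicConfig(
    stream=sys.stderr,
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

logger = logging.getLogger(__name__)
logger.warning("here be bugs")
```

Each record then shows up, timestamped and attributed to its module, in heroku logs --tail or whatever monitoring tool (e.g. Sentry) is reading the stream.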