I prefer pudb for Python debugging. I am building Python applications that run inside a Docker container.
Does anyone know how to make pudb available inside the Docker container?
Thank you
You need to have pudb installed in the Docker container (this can be done by adding this line to the Dockerfile: RUN pip install pudb).
You need the port you will use to connect to pudb to be open. E.g.
For a Dockerfile: add EXPOSE 6900.
For docker-compose the syntax is different:
ports:
- "6900:6900"
You need to add a call to set_trace at the point in the Python code where you want the debugger to start. E.g.
from pudb.remote import set_trace; set_trace(term_size=(160, 40), host='0.0.0.0', port=6900)
When the running code reaches that point, you can connect to it with a telnet client and use pudb as you normally would to debug. In the case above, from another terminal type telnet 127.0.0.1 6900.
You can find a repository with a full working example here: https://github.com/isaacbernat/docker-pudb
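A pattern I find handy (a sketch, assuming the same pudb install and port as above; maybe_set_trace and REMOTE_DEBUG are illustrative names) is to guard the set_trace call behind an environment variable, so the same image runs untouched unless you explicitly ask for the debugger:

```python
import os

def maybe_set_trace():
    """Start the remote pudb session only when REMOTE_DEBUG is set,
    so the same image runs normally in production."""
    if os.environ.get("REMOTE_DEBUG"):
        # pudb must be installed in the image (RUN pip install pudb)
        from pudb.remote import set_trace
        set_trace(term_size=(160, 40), host='0.0.0.0', port=6900)

def handle_request(payload):
    maybe_set_trace()  # no-op unless REMOTE_DEBUG is set
    return payload.upper()

print(handle_request("hello"))  # prints HELLO when REMOTE_DEBUG is unset
```

You can then turn debugging on with `docker run -e REMOTE_DEBUG=1 ...` without rebuilding the image.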
I recently followed this tutorial to try to debug Python code in a Docker container using VSCode:
https://www.youtube.com/watch?v=qCCj7qy72Bg&t=374s
My Dockerfile looks like this:
FROM ubuntu AS base
# Do standard image stuff here

# Python debugger
FROM base AS debugger
RUN pip3 install debugpy
ENTRYPOINT ["python3","-m","debugpy","--listen","0.0.0.0:5678","--wait-for-client"]
I have alternately tried copying the tutorial exactly and using the following ENTRYPOINT instead:
ENTRYPOINT ["python3","-m","debugpy","--listen","0.0.0.0:5678","--wait-for-client","-m"]
I have also configured a VSCode remote attach debug instance to launch.json:
{
  "name": "Python: Remote Attach",
  "type": "python",
  "request": "attach",
  "connect": { "host": "5678", "port": 5678 },
  "pathMappings": [
    { "localRoot": "${workspaceFolder}", "remoteRoot": "." }
  ]
},
I want the debugger to either debug the current file alone in isolation, or run a file I use to run the entire project, called init.py with the debugger in the docker container.
Currently, when I build and run the docker container with
docker run -p 5678:5678 CONTAINERNAME python3 /home/init.py
It hangs and times out on the Visual Studio side.
In the video, he uses this to run the Python unittest module, which is why I tried taking the -m off the end of the command in my modified version. However, it looks like debugpy doesn't know what to do. I have tried running the Docker instance before the remote debugger, and the remote debugger after the Docker instance, but the error remains and the debugging does not work. How can I remote debug into a Docker instance using VSCode?
EDIT:
Thank you FlorianLudwig for pointing out that my original code used commas in the IP address rather than the required periods.
I have edited the question to reflect this change. It removed issues where python complained about a malformed address, but it seems I am still having some sort of connection issue to the debugger.
EDIT2:
I think I figured out what caused the connection issue. It appears the visual studio default is to use the same host as the port number in question. I changed my host to 0.0.0.0 and I was able to debug by running the container then connecting to it via Visual Studio Debugging.
In your Dockerfile:
"0,0,0,0:5678" should be "0.0.0.0:5678"
to make it a valid IP address. 0.0.0.0 basically means "any" IP address, i.e. listen on all interfaces.
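For reference, a launch.json entry along these lines should then work (a sketch; "localhost" is the usual connect host when the container's port 5678 is published to the host, and the asker reports 0.0.0.0 also worked for them):

```json
{
  "name": "Python: Remote Attach",
  "type": "python",
  "request": "attach",
  "connect": { "host": "localhost", "port": 5678 },
  "pathMappings": [
    { "localRoot": "${workspaceFolder}", "remoteRoot": "." }
  ]
}
```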
I am building a webapp (a simple Flask site) that uses Docker. I want my development code to not reside within Docker, but be executed by the development environment (using python3) I have defined in my Dockerfile. I know that I can use the COPY . . syntax in a Dockerfile to copy my source code into the image for execution, but that violates my aim of separating the container from my source. Is there a way to have a Docker container read and execute the code that is in the directory I run the docker container run command from?
Right now my container uses the COPY command to copy all the source code into the container. It then uses the CMD instruction to automatically run the flask app:
CMD [ "python", "flask_app/server.py" ]
(I'm storing all my flask code in a directory called flask_app). I'm assuming this works because all this has been copied into the container (according to the specifications given in the dockerfile) and is being executed when I run the container. I would like for the container to instead access and execute flask_app/server.py without copying this information into itself -- is this possible? If so, how?
Instead of using COPY to move the code into the container, you'll use a "bind mount" (https://docs.docker.com/storage/bind-mounts/).
When you run the container, you'll do it with a command like this:
docker run --mount type=bind,source=<path_outside_container>,target=<path_inside_container> <image_tag>
For portability, I recommend putting this line in a script intended to be run from the repository root, and making <path_outside_container> be "$(pwd)", so that it will work on other people's computers. You'll need to adjust <path_inside_container> and your CMD depending on where you want the code to live inside the container.
(Obviously you can also put whatever other options you'd like on the command, like -it, --rm, or -p <whatever>.)
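For example, a small run.sh kept at the repository root might look like this (a sketch; the /app target path, the published port, and the image tag flask_app are placeholders to adjust for your setup):

```shell
#!/bin/sh
# Run the image with the current directory bind-mounted over the
# code location inside the container, so edits on the host are
# picked up without rebuilding the image.
docker run -it --rm \
    -p 5000:5000 \
    --mount type=bind,source="$(pwd)",target=/app \
    flask_app
```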
I have a pretty simple setup. I'm running Pycharm 2018.2.3 and using docker compose to spin up 3 containers.
My Django application
NGINX to serve static
Postgres DB
I've configured the remote interpreter for debugging the container, and break point work just fine in most cases, at least when I hit my API endpoints or some other action to the django application.
What does not work is when I run one of my custom manage.py commands. I've tried this two ways so far.
I set up another debug configuration in PyCharm to execute the command. This results in another container spinning up (in place of the original), running the command without stopping at any breakpoints, and then the whole container shuts down.
I've logged into the container and run the manage.py command directly via the command line. It executes in the container, but again no breakpoints.
The documented approach works in the normal case, but I can't find any help for debugging these commands in the container.
Thanks for any help or tips.
In order to debug Django commands in a Docker container, you can create a new Run/Debug Configuration with the following setup:
Use a Python configuration template
Script path: absolute location of manage.py
Parameters: the Django command you want to debug/execute
!important! Python interpreter: Docker Compose interpreter
Just an update in case anybody comes across a similar problem. My personal solution was to not use the manage.py commands, but instead make these same commands available via an HTTP call.
I found that it was easier (and often even more useful) to simply have an endpoint like myserver.com/api/do-admin-function and restrict that to administrative access.
When I put a breakpoint in my code, even running in the container, it breaks just fine as expected and allows me to debug the way I'd like
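As a sketch of that idea in plain Python (the names admin_only and do_admin_function are illustrative, and request.user.is_staff stands in for whatever access check your framework provides; in real Django you would use e.g. the staff_member_required decorator):

```python
def admin_only(view):
    """Reject callers whose user lacks staff access before
    running the wrapped view."""
    def wrapper(request, *args, **kwargs):
        if not getattr(request.user, "is_staff", False):
            return (403, "Forbidden")
        return view(request, *args, **kwargs)
    return wrapper

@admin_only
def do_admin_function(request):
    # Run the same logic the manage.py command used to run.
    # A breakpoint set here is hit inside the normal server
    # process, so the container-based debugger works as usual.
    return (200, "task complete")
```

Because the endpoint runs in the same server process PyCharm is already attached to, no extra debug configuration is needed.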
It can depend on the exact content of your docker-compose.yml.
See for instance the section "An interactive debugger inside a running container!" of the article "A Simple Recipe for Django Development In Docker (Bonus: Testing with Selenium)" from Adam King.
His docker-compose.yml includes:
version: "2"
services:
django:
container_name: django_server
build:
context: .
dockerfile: Dockerfile
image: docker_tutorial_django
stdin_open: true
tty: true
volumes:
- .:/var/www/myproject
ports:
- "8000:8000"
In it, see:
stdin_open: true
tty: true
[Those 2 lines] are important, because they let us run an interactive terminal.
Hit ctrl-c to kill the server running in your terminal, and then bring it up in the background with docker-compose up -d.
docker ps tells us it’s still running:
We need to attach to that running container, in order to see its server output and pdb breakpoints.
The command docker attach django_server will present you with a blank line, but if you refresh your web browser, you’ll see the server output.
Drop import pdb; pdb.set_trace() in your code and you’ll get the interactive debugger, just like you’re used to.
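The attach workflow above can be sketched as (assuming the container name django_server from the compose file):

```shell
docker-compose up -d        # bring the stack up in the background
docker ps                   # confirm django_server is running
docker attach django_server # attach; server output and pdb prompts appear here
```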
The docker plugin has a debug port for connecting to a container
I have a python application, but according to the docs the debug port is only supported for java.
How can I set breakpoints and debug my python container in intellij? Is there some way I could have the python container connect to the intellij python debugger?
Edit: I'm running Windows 10, Docker for Windows, and the container is Linux. Perhaps I need to manually set up some kind of remote debugging for the IntelliJ Python debugger? Also, might as well ask: must I have the Professional version for remote debugging, or is there a workaround using Community?
You can do that using Python Remote Debugging. Open the configurations window and click + -> Python Remote Debug
Then either set a port or keep it blank for PyCharm to find an available port.
Then click on Debug icon to launch the debug server which will show below kind of message
Starting debug server at port 57588
Use the following code to connect to the debugger:
import pydevd
pydevd.settrace('localhost', port=57588, stdoutToServer=True, stderrToServer=True)
Waiting for process connection...
Now you need to set up pydev debugging inside Docker. You will need the pycharm-debug-py3k.egg for this. I copied it to my current Dockerfile folder like below:
cp "/Users/tarun.lalwani/Library/Application Support/IntelliJIdea2017.2/python/pycharm-debug-py3k.egg" .
The location for yours will change based on the IntelliJ version installed. After that, we need to edit our Dockerfile:
FROM python:3.6
WORKDIR /app
ENV PYTHONPATH=/app:/app/debug
COPY pycharm-debug-py3k.egg /app/debug
COPY debug_test.py /app/
CMD python debug_test.py
The debug_test.py will have the following lines at the top:
import pydevd
pydevd.settrace('docker.for.mac.localhost', port=55507, stdoutToServer=True, stderrToServer=True)
Note: I have used docker.for.mac.localhost as I use Docker for Mac, but if you use Docker for Windows then use docker.for.win.localhost. For Toolbox or Linux, add the IP of your machine instead.
Since it is Docker, we probably want to keep the port fixed instead of dynamic like I did. Now we build the Dockerfile and run it.
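The build-and-run step might look like this (the image tag debug_test is a placeholder; no -p mapping is needed here because pydevd connects outward from the container to the debug server on the host):

```shell
docker build -t debug_test .
docker run debug_test
```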
This will open a popup in pycharm, click autodetect to detect the source mappings
And then you will have your code breakpointed at the main line of your file
On my Windows 10 machine, I am developing a database manager. Because the backend uses LDAP and the required development libraries are only available for Linux, I want to use Docker to set up an environment with the appropriate libs.
I managed to write a Dockerfile and compose file, that launch the (currently very basic) Django app in a Docker container with all the libs necessary.
I would like to play around with the django-ldapdb package and for that I want to apply the migrations.
When I open PyCharm's terminal and try to execute python manage.py migrate, I get an error telling me that the module ldapdb is not found. I suspect this is because the command does not use the remote Docker interpreter I set up with PyCharm.
The other thing I tried is using PyCharm's dedicated manage.py console. This does not initialize properly. It says the working directory is invalid and needs to be an absolute path, although the path it shows is the absolute path to the project.
I have to admit that I have no idea how this remote interpreter works and I don't see any Docker container running, so I might have not understood something properly here. I even tried running the app using PyCharm's Django run config, which started a container, but still I get the same errors.
I googled a lot, but I couldn't find more infos about remote interpreters nor something solving my issue.
The only way I managed to do this, is by executing the command inside the container.
To get inside a container named contr, use the docker command
docker exec -ti contr /bin/bash
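From there you can run the command with the container's own interpreter, where ldapdb is installed, e.g.:

```shell
docker exec -ti contr /bin/bash   # open a shell inside the running container
# then, at the container's prompt:
python manage.py migrate
```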