PyCharm "Couldn't connect to console process" when using remote Docker interpreter - python

I am trying to run my code within a docker container hosted on an AWS EC2 machine.
It seems that PyCharm can connect to the interpreter because it can show the list of installed packages when looking at the interpreter configuration.
However, when I try to open a Python console, or when I try to run a Python script, I have the error:
3987f6fc2476:/usr/bin/python3 /opt/.pycharm_helpers/pydev/pydevconsole.py --mode=server --port=55516
Couldn't connect to console process.
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
Happy to provide more information. What could be going wrong here? The error seems pretty generic.
EDIT: PyCharm can start the Docker container, but the Python console still won't work. On the server, docker ps returns:
ecd6a7220b55 9e1ad5b17633 "/usr/bin/python3 /o…" 1 second ago Up Less than a second 22/tcp, 0.0.0.0:50219->50219/tcp dreamy_matsumoto

Turns out the issue was that PyCharm uses a random port every time it starts a Python console when connecting to a remote Docker container. If we opened all the inbound ports on the EC2 instance, this feature would work. Of course, there is nothing worse from a security perspective. Do NOT do this. (But if you really want to do it, you'll need to set up Docker over TCP.)
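For reference, exposing the Docker daemon over TCP is configured in /etc/docker/daemon.json; a minimal sketch (2375 is the conventional unencrypted port, 2376 is used with TLS; restrict access with a firewall rule rather than opening it to the world):
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
On systemd-based distributions the default unit also passes -H fd:// on the command line, which conflicts with a hosts entry in daemon.json, so you may need a systemd override that clears that flag.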

Related

Will a Python script running inside a Kubernetes pod exit if my laptop/system is turned off?

I am running a Python script inside a Kubernetes pod with kubectl exec -it bash. It's a long-running script which might take a day to complete. I executed the Python script from my laptop inside the Kubernetes pod.
If I close my laptop, will the script stop running inside the pod?
If you are running Kubernetes in the cloud, the script will continue until it finishes successfully or throws an error, even if you close your laptop.
Otherwise, if you are running a local Kubernetes cluster, for example with minikube, the cluster will shut down and so will your script.
It's not possible to know the answer without at least the following information:
laptop OS (including distribution and version)
whether your k8s is running directly on your laptop or on remote hardware
I'll assume you're running Linux. If you are running a production k8s cluster locally on your laptop (in which case, why?), then you likely have to change the settings in your desktop environment, or temporarily disable acpid, or your virtualised cluster will cease to exist when the power turns off. All of this is completely dependent on your hardware and software.
If the process is running remotely (on other hardware), turning off your laptop will not make a difference to the running script. Read the man page for kubectl-exec:
-i, --stdin=false
Pass stdin to the container
-t, --tty=false
Stdin is a TTY
The arguments for an interactive shell are just about mapping stdin to the container; kubectl won't kill your remote process if your laptop turns off, loses network connectivity, etc.
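If you want to be safe regardless of what happens to the exec session, you can detach the script from it entirely; a minimal sketch, assuming the pod is named mypod and the script lives at /app/long_script.py (both placeholders):
# start the script in the background inside the pod, immune to the exec session ending
kubectl exec mypod -- bash -c 'nohup python3 /app/long_script.py > /tmp/run.log 2>&1 &'
# check on it later from any machine with cluster access
kubectl exec mypod -- tail -n 20 /tmp/run.log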

PyCharm debugger "waiting for connections" but the run keeps working

I am trying to debug a Django project with a docker-compose interpreter.
Here are my PyCharm configurations.
But when I try to debug the project, it keeps running while the debugger is still waiting for a connection and breakpoints don't work.
I think the structure of my project may be the problem, because when I try to debug another project it works.
Here is my project structure.
What am I doing wrong?
To whomever else it might help: the problem in my case was that I attempted to use the debugger together with the run-inside-a-Docker-container functionality.
I also happened to have all ports published on that container, which prevented the debugger from connecting. Publishing only the ports I actually needed resolved the problem.
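For illustration, the difference between publishing everything and publishing one port looks like this with plain docker run (the image name is a placeholder):
# -P publishes every exposed port to a random host port, which can collide with the debugger
docker run -P myimage
# -p publishes only the port you actually need
docker run -p 8000:8000 myimage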
Check the running ports on your machine. In my case, the port that PyCharm wanted to use for debugging (127.0.0.1:xxxx) was being used by another program on my laptop.
You can check the listening ports using the following command on macOS:
lsof -i -P | grep -i "listen"
Or, the following command once you know which port PyCharm is trying to use (usually you can see it at the top of PyCharm's console tab after starting the debugging process):
sudo lsof -i :xxxxx
After running that, you should see a list with PID numbers, program names, etc. Then you can kill the process running on that port using its PID:
sudo kill -9 PID
Or, just restart your computer.
If that doesn't work, it might be due to a Python file in your project shadowing an existing module name. Make sure none of the files in your project share a name with a standard-library or installed module (for example, a file named socket.py or code.py will shadow the real module and can break the debugger).

Running a Python debugger in a Docker Image

I recently followed the following tutorial to try to debug python code in a Docker container using VSCode:
https://www.youtube.com/watch?v=qCCj7qy72Bg&t=374s
My Dockerfile looks like this:
FROM ubuntu AS base
# Do standard image stuff here
# Python debugger
FROM base AS debugger
RUN pip3 install debugpy
ENTRYPOINT ["python3","-m","debugpy","--listen","0.0.0.0:5678","--wait-for-client"]
I have alternately tried copying the tutorial exactly and using the following ENTRYPOINT instead:
ENTRYPOINT ["python3","-m","debugpy","--listen","0.0.0.0:5678","--wait-for-client","-m"]
I have also configured a VSCode remote attach debug instance to launch.json:
{"name":"Python: Remote Attach","type":"python","request":"attach","connect":{"host":"5678","port":5678},"pathMappings":[{"localRoot":"${workspaceFolder}","remoteRoot":"."}]},
I want the debugger to either debug the current file alone in isolation, or run a file I use to run the entire project, called init.py with the debugger in the docker container.
Currently, when I build and run the docker container with
docker run -p 5678:5678 CONTAINERNAME python3 /home/init.py
It hangs and times out on the Visual Studio side.
In the video, he uses this to run the Python unittest module, which is why I tried taking out the -m from the end of the command in my modified version. However, it looks like debugpy doesn't know what to do. I have tried starting the Docker instance before the remote debugger and vice versa, but the error remains and the debugging does not work. How can I remote-debug into a Docker instance using VSCode?
EDIT:
Thank you FlorianLudwig for pointing out that my original code used commas for the IP rather than the periods required.
I have edited the question to reflect this change. It removed issues where python complained about a malformed address, but it seems I am still having some sort of connection issue to the debugger.
EDIT2:
I think I figured out what caused the connection issue: in my launch.json I had set the host to the same value as the port number. I changed the host to 0.0.0.0 and was able to debug by running the container and then connecting to it from the VS Code debugger.
In your Dockerfile:
"0,0,0,0:5678" should be "0.0.0.0:5678"
to make it a valid IP address. 0.0.0.0 basically means "any" IP address, so debugpy listens on all interfaces.
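For anyone assembling the working combination: with the ENTRYPOINT above, docker run appends its arguments to the entrypoint, so pass only the script path (repeating python3 would make debugpy try to run a file literally named python3), and point launch.json at the published port. A sketch, assuming VS Code runs on the same host as the container:
docker run -p 5678:5678 CONTAINERNAME /home/init.py
and in launch.json:
"connect": { "host": "localhost", "port": 5678 }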

Run a Python script on DigitalOcean

I'm having trouble running a Python script on DigitalOcean.
I have two questions:
How to upload the script.py to a DigitalOcean droplet.
How to run the script.
I'm able to access the console, but beyond that I don't know what to do, and I can't find any specific information on the internet.
I'm running an Ubuntu 14.04 droplet through the web interface.
OK, first: in order to upload any file to your droplet you can use the command scp
scp foobar.txt your_username@remotehost.edu:/some/remote/directory
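For a droplet specifically, that might look like this (the IP address and destination path are placeholders):
scp script.py root@203.0.113.10:/root/script.py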
Here is a related question that shows you how to use scp from Windows.
Then, in the console on the remote host, check whether you can run the command python. If you don't have it, just follow the steps in the documentation and you will have Python running on your remote machine.
If you put a Python script on the server and ssh in, you can run it from the command line. For instance,
python yourFantasticScript.py
If you want some level of automation for triggering the script, you will need to learn more about scheduling and server tooling.
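For instance, two common options, with placeholder paths (nohup keeps the script alive after you log out; a crontab entry runs it on a schedule):
# keep the script running after you log out
nohup python /root/yourFantasticScript.py > script.log 2>&1 &
# or run it every day at 03:00 (add this line via: crontab -e)
0 3 * * * /usr/bin/python /root/yourFantasticScript.py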

Python code crashes with "cannot connect to X server" when detaching ssh+tmux session

I run Python code on a remote machine (which I ssh into) inside a tmux session. The code runs fine UNTIL I disconnect from the remote machine. The whole point of using tmux is that the code continues to run even when I'm not connected to the remote machine. When I reconnect later, I see the error message:
: cannot connect to X server localhost:11.0
Does anyone have an idea why this is happening or how I can stop it?
cannot connect to X server localhost:11.0
...means that your code is trying (and failing) to connect to an X server -- a GUI environment -- presumably being forwarded over your SSH session. tmux provides session continuity for terminal applications; it can't emulate an X server.
If you want to stop it from being able to make any GUI connection at all (and perhaps, if the software is thusly written, from even trying), unset the DISPLAY environment variable before running your code.
If this causes an error or exception, the code generating that is the same code that's causing your later error.
If you want to create a fake GUI environment that will still be present, you can do that too, with Xvfb.
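Done by hand, what xvfb-run automates looks roughly like this (the display number :99 is arbitrary):
# start a virtual framebuffer X server on display :99
Xvfb :99 -screen 0 1280x1024x24 &
# run your code against the virtual display
DISPLAY=:99 python yourcode.py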
Some Linux distributions provide the xvfb-run wrapper, to automate setting this up for you:
# prevent any future commands in this session from connecting to your real X environment
unset DISPLAY XAUTHORITY
# run yourcode.py with a fake X environment provided by xvfb-run
xvfb-run python yourcode.py
By the way, see the question xvfb-run unreliable when multiple instances invoked in parallel for notes on a bug present in xvfb-run, and a fix available for same.
If you want an X server you can actually detach from and reattach to later, letting you run GUI applications with functionality similar to what tmux gives you for terminal applications, consider using x11vnc or a similar tool.
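A sketch of that approach, pairing Xvfb with x11vnc (display number and screen geometry are arbitrary):
# persistent virtual display
Xvfb :20 -screen 0 1280x800x24 &
# serve it over VNC so you can attach and detach, much like tmux
x11vnc -display :20 -forever -bg
# run the GUI code against the virtual display
DISPLAY=:20 python yourcode.py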
