PyCharm debugger "waiting for connections" but the program keeps running - python

I am trying to debug a Django project with a docker-compose remote interpreter.
Here is my PyCharm configuration:
But when I try to debug it, the project keeps running while the debugger just sits on "waiting for connection" and breakpoints never trigger.
I think the structure of my project may be the problem, because when I try to debug another project, debugging works fine.
Here is my project structure:
What am I doing wrong?

To whomever else it might help: the problem in my case was that I attempted to use the debugger together with the "run inside a Docker container" functionality.
I also happened to have all ports published on that container, which prevented the debugger from connecting. Publishing only the ports I actually needed resolved the problem.

Check the ports in use on your machine. In my case, the port that PyCharm wanted to use for debugging (127.0.0.1:xxxx) was already being used by another program on my laptop.
You can check the listening ports with the following command on macOS:
lsof -i -P | grep -i "listen"
Or, use the following command once you know which port PyCharm is trying to use (usually shown at the top of PyCharm's console tab after you start the debugging process):
sudo lsof -i :xxxxx
After running that, you should see a list with PID numbers, program names, etc. You can then kill the process occupying that port using its PID:
sudo kill -9 PID
Or, just restart your computer.
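If you prefer to check from Python rather than lsof, a minimal sketch like the following tells you whether the debug port is free (the port number here is a placeholder; use the one PyCharm reports in its console tab):

import socket

port = 12345  # placeholder: the port PyCharm reports in the console tab
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    try:
        s.bind(("127.0.0.1", port))
        print(f"Port {port} is free")
    except OSError as exc:
        print(f"Port {port} is already in use: {exc}")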
If that doesn't work, it might be a module-name clash: make sure none of the Python files in your project have the same name as a standard-library module or another installed library.
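As a hypothetical illustration of that last point: a project file named socket.py (or logging.py, select.py, ...) shadows the standard-library module of the same name that pydevd itself imports. A quick way to check, run from your project root, is:

import importlib.util

# Module names here are only examples of stdlib modules the debugger relies on.
for name in ("socket", "logging", "select"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin)  # a path inside your project means it is shadowed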

Related

PyCharm "Couldn't connect to console process" when using remote Docker interpreter

I am trying to run my code within a docker container hosted on an AWS EC2 machine.
It seems that PyCharm can connect to the interpreter because it can show the list of installed packages when looking at the interpreter configuration.
However, when I try to open a Python console, or when I try to run a Python script, I have the error:
3987f6fc2476:/usr/bin/python3 /opt/.pycharm_helpers/pydev/pydevconsole.py --mode=server --port=55516
Couldn't connect to console process.
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
Happy to provide more information. What is possibly going wrong here? The error seems pretty generic.
EDIT: PyCharm can start the docker container but still the Python console won't work. On the server, docker ps returns:
ecd6a7220b55 9e1ad5b17633 "/usr/bin/python3 /o…" 1 second ago Up Less than a second 22/tcp, 0.0.0.0:50219->50219/tcp dreamy_matsumoto
It turned out that the issue was that PyCharm uses a random port every time it starts a Python console against a remote Docker interpreter. If we could open all the inbound ports on the EC2 instance, this feature would work. Of course, there is nothing worse from a security perspective; do NOT do this. (If you really want to do it, you'll need to set Docker up over TCP.)

Git bash "Watching for file changes with StatReloader" stuck and never loads

I have set up a Django project in a virtual environment on my PC. When I run the command
python manage.py runserver 0.0.0.0:8000
Git Bash stops doing anything and I have to end the program to start over. I have waited several minutes, and when I end the session, a dialog says:
Processes are running in session:
WPID PID COMMAND
14904 1534 c:\Users\mine\AppData\Loca
Close anyway?
I have looked at every related question to this and tried every solution but I cannot get this to work, on or off the virtual environment.
Not sure if this applies, but I also noticed that in my task manager, python3.9.exe appears twice when trying to start the server. The status says running, and the PIDs are different numbers.
This is because something is already running on port 8000. You can run the Django server on another port, such as 8080, or you can kill the task already running on that port (the answers below show how).
It is highly likely that another application is running on port 8000. Try running the server on another port, say 8088, and check whether the same issue persists.
To run on your chosen port, first kill all the PIDs that Git Bash shows you:
list them with ps -ef, kill them with kill -9 <pid>, then run your runserver command again.
Ex:
ps -ef
kill -9 123
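If you want to confirm from Python that something is already listening on port 8000 before starting runserver, a small sketch like this (just an illustration, not from the answer) does the check:

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    # connect_ex returns 0 when something accepts the connection,
    # i.e. the port is already taken by another process.
    in_use = s.connect_ex(("127.0.0.1", 8000)) == 0

print("Port 8000 is", "in use" if in_use else "free")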
I ended up paying someone to fix this. Here is the conversation I had with the person who found the issue. Hope this helps someone out there.
Also, I noticed it sometimes still freezes on "Watching for file changes with StatReloader" until I go to the browser and type in http://localhost:8000/. Once I do that, the rest of the text loads and my browser shows "The install worked successfully! Congratulations!"
it works, what did you do?
- There were existing processes on port 8000. I think you had the server running and kept trying over and over. So I killed the process using port 8000.
- Next, I ran the Django project again.
- So, now it's working well.
ok. Can you tell me what you did at the start to make it work? Or was it always working?
- I started a new Git Bash session when I began. Maybe the old Git Bash session had problems.
But this was going on for a day; I restarted the computer several times and restarted Git Bash. Same issue.
You must have done something.
- Sorry, I didn't do anything except restart Git Bash.
how did you restart it?
- Right-click the top of the window and select New.
How do I turn server off?
- press Ctrl + C
- And you didn't quit the server cleanly before. Maybe you closed Git Bash directly and the server was still live.
- So there were multiple processes holding port 8000.
Ok. How can I see what is on that port now?
- You can use this CLI command:
- netstat -ano | findstr :<PORT>
- Nothing is running on port 8000 now.
I never found a solution to this problem, even after killing processes on the port, changing the port, etc.
It never displays the following line:
Starting development server at http://127.0.0.1:8000/
However, when I use the Git Bash terminal included in VS Code, it does display the URL.
Terminal > New Terminal, then run:
python manage.py runserver 127.0.0.1:8080
Result: the server starts and the URL is displayed.

Python process suspends on SSH logout after nohup/screen

I have a remote server through Blue Host that's intended to run a server based on Twisted for Python. The only access I have to it is over SSH, so to keep Python running after I log out I tried using nohup python server.py & and screen -dm python server.py, getting the same results for each. Everything works fine until I log out of SSH - even though Python is running in the background as expected, once I've logged out, my client can no longer communicate with the server. The strange part is that if I log back in over SSH and check the running processes with ps aux, I see Python running and my client can successfully communicate with the server again. Even if I don't type anything at all once I log back in, everything works as expected. But, of course, as soon as I log back out, it's as if the server is gone.
I've contacted support for the hosting service in case this is some oddity on their end, but hopefully this is something that can be resolved on my end instead.
Edit: Looks like Blue Host doesn't want me doing server-y stuff without buying the VPS upgrade so it looks like that's the big problem.
Edit 2: Okay, so in case anybody ends up having a similar problem, here's what the main issue turned out to be. I was mistaken in my original description; I was able to connect to the server but I was getting kicked off immediately for what turned out to be a MySQL error. I guess trying to connect to a localhost database with no active connection somehow causes problems, so instead I changed the MySQL connection command to connect to my site's IP address instead, even though it was the same IP as the server. That seemed to do the trick in terms of my main issue.
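For anyone hitting the same thing, the fix described in Edit 2 amounts to something like this (a sketch only: the answer doesn't name the MySQL driver, and the host, credentials and database are placeholders):

import pymysql  # assumption: any MySQL client with a host argument behaves the same way

conn = pymysql.connect(
    host="203.0.113.10",   # the site's IP address rather than "localhost"
    user="dbuser",         # placeholder credentials
    password="secret",
    database="mydb",
)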
Don't use this method to keep the server process running. Instead, try using supervisor (apt-get install supervisor). It lets you daemonize your process and gives you the ability to stop/restart it, among other things.
Here's a sample config entry (/etc/supervisor/supervisord.conf):
[program:my_server]
command=python /path/to/server/server.py
directory=/path/to/server/
autostart=true
autorestart=true
stdout_logfile=/var/log/server.log
stderr_logfile=/var/log/server_error.log
user=your_linux_user_name
After you edit your config, do
sudo service supervisor stop
sudo service supervisor start #need to do this - doing a `restart` doesn't reload the config file!
Your server should now be running properly. You can manage its lifecycle via sudo supervisorctl.

Fabric doesn't start a Twisted application as a daemon

I have written a simple automation script for deploying and restarting my twisted application on remote Debian host. But I have an issue with starting using twistd.
I have a run.tac file and start my application as follows inside a Fabric task:
@task
def start():
    run("twistd -y run.tac")
And then I just run fab -H host_name start. It works great on localhost, but when I want to start the application on a remote host I get nothing. I can see in the log file that the application is actually launched, but the factory is not started. I've also checked netstat -l: nothing is listening on my port.
I've tried running in non-daemon mode, like so: twistd -ny run.tac, and, voila, the factory started and I can see it in netstat -l on the remote host. But that is not the way I want it to run; it should run as a daemon. Any help is appreciated.
There was an issue reported some time back that is similar to this:
Init scripts frequently fail to start their daemons
init-scripts-dont-work
It also suggested that it seems to succeed with the option pty=False. Can you try that and check?
run("twistd -y run.tac", pty=False)
Some more pointers from the FAQ:
why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
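Putting the question's task together with the pty=False suggestion, a minimal sketch (Fabric 1.x API, as used in the question) would look like this:

from fabric.api import run, task

@task
def start():
    # Without a pseudo-terminal, Fabric doesn't kill the detached twistd
    # daemon when the SSH session that started it ends.
    run("twistd -y run.tac", pty=False)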

How do I run Django as a service?

I am having difficulty running Django on my Ubuntu server. I am able to run Django but I don't know how to run it as a service.
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
Here is what I am doing:
I log onto my Ubuntu server
Start my Django process: sudo ./manage.py runserver 0.0.0.0:80 &
Test: Traffic passes and the app displays the right page.
Now I close my terminal window and it all stops. I think I need to run it as a service somehow, but I can't figure out how to do that.
How do I keep my Django process running on port 80 even when I'm not logged in?
Also, I get that I should be linking it through Apache, but I'm not ready for that yet.
Don't use manage.py runserver to run your server on port 80. Not even for development. If you need that for your development environment, it's still better to redirect traffic from 8000 to 80 through iptables than to run your Django application as root.
In the Django documentation (or in other answers to this post) you can find out how to run it behind a real web server.
If, for any other reason, you need a process to keep running in the background after you close your terminal, you can't just run it with &: it will run in the background but keep your session's session ID, and it will be closed when the session leader (your terminal) is terminated.
You can circumvent this behaviour by running the process through the setsid utility. See the setsid man page for more details.
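If you want the same effect from Python itself, subprocess can start the server in a new session; this is just an illustration of the setsid idea (paths and port are placeholders), not something from the answer:

import subprocess

# start_new_session=True calls setsid() in the child, detaching it from the
# terminal's session so it keeps running after you log out.
log = open("runserver.log", "ab")
subprocess.Popen(
    ["python", "manage.py", "runserver", "0.0.0.0:8000"],
    start_new_session=True,
    stdout=log,
    stderr=subprocess.STDOUT,
)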
Anyway, if after reading the other comments you still want to use manage.py runserver, just add nohup in front of your command line:
sudo nohup /home/ubuntu/django_projects/myproject/manage.py runserver 0.0.0.0:80 &
For this kind of job, since you're on Ubuntu, you should use the awesome Ubuntu Upstart.
Just create a config file, e.g. django-fcgi.conf if you're going to deploy Django with FastCGI:
/etc/init/django-fcgi.conf
and put the required Upstart syntax instructions in it.
Then you would be able to start and stop your runserver command simply with:
start runserver
and
stop runserver
Examples of managing the deployment of Django processes with Upstart: here and here. I found those two links helpful when setting up this deployment structure myself.
The problem is that & runs a program in the background but does not separate it from the spawning process. However, an additional issue is that you are running the development server, which is only for testing purposes and should not be used for a production environment.
Use Gunicorn or Apache with mod_wsgi. The documentation for Django and for these projects makes it clear how to serve it properly.
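For example, a minimal Gunicorn setup can be sketched like this (the project name myproject is an assumption; a standard startproject layout provides myproject/wsgi.py):

# gunicorn.conf.py -- a minimal sketch; adjust the worker count to your machine
bind = "0.0.0.0:8000"   # keep a real web server or proxy in front for port 80
workers = 3
accesslog = "-"          # write the request log to stdout

You would then start it with something like gunicorn -c gunicorn.conf.py myproject.wsgi:application, typically kept alive by a process manager such as the supervisor or Upstart setups described elsewhere on this page.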
If you just want a really quick-and-dirty way to run your Django dev server on port 80 and leave it there -- which is not something I recommend -- you could potentially run it in a screen session. screen creates a terminal that will not close even if you close your connection. You can even run the server in the foreground of a screen terminal and disconnect, leaving it to run until reboot.
If you are using virtualenv, the sudo command will execute the manage.py runserver command outside of the virtual environment context, and you'll get all kinds of errors.
To fix that, I did the following:
While working in the virtual env, type:
which python
outputs: /home/oleg/.virtualenvs/openmuni/bin/python
then type:
sudo !!
outputs: /usr/bin/python
Then all that's left to do is create a symbolic link so that the global python points to the python in the virtualenv that you currently use and would like to run on 0.0.0.0:80.
First, move the global python binary to a backup location:
mv /usr/bin/python /usr/bin/python.old
Then create the link; that should do it:
ln -s /home/oleg/.virtualenvs/openmuni/bin/python /usr/bin/python
That's it! Now you can run sudo python manage.py runserver 0.0.0.0:80 in the virtualenv context!
Keep in mind that if you are using a Postgres DB in your local development setup, you'll probably need a root role.
Credit to @ydaniv
