edX LMS port 8000 already in use (even after killing processes) - python

I got the following issue when I try to run my edX LMS (port 8000):
Error: That port is already in use
So in my vagrant account I found the processes using port 8000 and did kill -9 on them. But as soon as I killed them, they were automatically restarted and took port 8000 again, so I am still unable to run the LMS.

When that happens, I just do:
vagrant reload
(You will have to log out of the SSH session first by typing logout.)
It is equivalent to:
vagrant halt
vagrant up

I've had times on OS X with Vagrant where I've had to kill not only the vagrant process but also VirtualBox, when vagrant reload hasn't worked.
On your machine (not the guest VM):
ps -eaf | fgrep -i vagrant
ps -eaf | fgrep -i virtualbox
Then kill all those processes and run vagrant up.

vagrant halt is enough to kill all the processes bound to the port in use.
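If you want to see exactly which process keeps grabbing the port before deciding what to restart, here is a minimal sketch using the third-party psutil package (my addition, not from the answers above):

import psutil

# List every process holding a listening TCP socket on port 8000
# (may need root to see sockets owned by other users).
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr.port == 8000 and conn.pid:
        proc = psutil.Process(conn.pid)
        print(conn.pid, proc.name(), " ".join(proc.cmdline()))

If the PID is different after every kill -9, something such as a supervisor is respawning the server, which is why restarting the VM is the cleaner fix.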

Related

Problem connecting to a Jupyter Notebook on a remote machine

I can ssh into a remote machine.
I then try to connect to a jupyter notebook job that I started on one of the nodes of the remote machine:
ssh -L 8069:localhost:8069 me@remote.machine ssh -L 8069:localhost:8069 me@node14
This has always worked fine in the past.
When I execute this lately, nothing happens until I eventually get a time out message. If I cancel it and then try to simply ssh into the remote machine again, it again does nothing until I get the error message:
ssh: connect to host remote.machine port 22: Connection timed out
I am trying to figure out if this is a problem at my end or at the remote machine. If it's the latter I can't understand why I am able to ssh to the remote machine fine until I try the
ssh -L 8069:localhost:8069 me@remote.machine ssh -L 8069:localhost:8069 me@node14
connection.
You are trying to do a double ssh connection: one to remote.machine and then another one to node14.
The problem seems to be the ssh process on the node14 machine: you can connect to the first machine but not to the second one. Ask your administrator to enable the sshd service on node14.
You can test this case by logging into remote.machine via:
ssh -L 8069:localhost:8069 me@remote.machine
Once you get shell access you can try the connection to node14 via:
ssh -L 8069:localhost:8069 me@node14
According to the description, this last try should fail with the timeout.
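If you want to check, independently of the tunnel, whether each sshd port is reachable at all, here is a small sketch (my addition) using Python's standard socket module, with the hostnames from the question:

import socket

def reachable(host, port=22, timeout=5):
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(reachable("remote.machine"))  # run this on your machine
# then run the same check against "node14" from a shell on remote.machine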

Airflow webserver launched via subprocess not dying on kill

Using Python 3.6.1. I am emulating launching an airflow webserver from the command line by starting it as a process with subprocess.Popen.
After doing some things, I later want to kill (or terminate) it.
webserver_process = subprocess.Popen(["airflow", "webserver"])
webserver_process.kill()
My understanding is that this will send a SIGKILL to the webserver, whose underlying gunicorn should shut down immediately.
However, when I navigate to http://localhost:8080 I see that the webserver is still running. Similarly when I then run sudo netstat -nlp|grep 8080 (I am using UNIX, and airflow webserver launches on port 8080), I discover:
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
It's only when I kill the process manually using sudo fuser -k 8080/tcp that it finally dies.
What's going on here?
The Python process started by the airflow webserver command actually calls subprocess.Popen itself to start gunicorn in a subprocess.
You can verify this by checking webserver_process.pid; you'll notice it is a different pid from the gunicorn master process's pid.
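Because the SIGKILL goes only to that parent pid, the gunicorn it spawned is orphaned and keeps listening on 8080. One common fix, sketched below assuming a POSIX system (my addition, not part of the original answer), is to start the command in its own session so the whole process group can be signalled:

import os
import signal
import subprocess

# start_new_session=True runs the webserver in a new session and process
# group, so gunicorn and its workers share the same group id.
webserver_process = subprocess.Popen(["airflow", "webserver"], start_new_session=True)

# ... later: signal the entire group instead of just the parent.
os.killpg(os.getpgid(webserver_process.pid), signal.SIGTERM)

SIGTERM gives gunicorn a chance to shut down cleanly; escalate to SIGKILL only if it ignores that.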

cannot quit running jupyter notebook server

I am using Jupyter Notebook for a project. Since I ssh into a linux cluster at work I use
ssh -Y -L 8000:localhost:8888 user@host
Then I start the notebook with jupyter notebook --no-browser & so that I can continue using the terminal. Then on my local machine I open localhost:8000 and go about my work.
My problem is that I forgot several times to close the server by foregrounding the process and killing it with Ctrl-C. Instead I just logged out of the ssh session. Now when I run jupyter notebook list I get
Currently running servers:
http://localhost:8934/ :: /export/home/jbalsells
http://localhost:8870/ :: /export/home/jbalsells
http://localhost:8892/ :: /export/home/jbalsells
http://localhost:8891/ :: /export/home/jbalsells
http://localhost:8890/ :: /export/home/jbalsells
http://localhost:8889/ :: /export/home/jbalsells
http://localhost:8888/ :: /export/home/jbalsells
I obviously do not want all of these servers running on my work's machine, but I do not know how to close them!
When I run ps I get nothing:
PID TTY TIME CMD
12678 pts/13 00:00:00 bash
22584 pts/13 00:00:00 ps
I have Jupyter 4.1.0 installed.
So I found a solution.
Since jupyter notebook list tells you which ports the notebook servers are running on, I looked for the PIDs using netstat -tulpn (a tip I got from http://www.cyberciti.biz/faq/what-process-has-open-linux-port/):
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp    0      0     0.0.0.0:8649     0.0.0.0:*          LISTEN   -
tcp    0      0     0.0.0.0:139      0.0.0.0:*          LISTEN   -
tcp    0      0     0.0.0.0:33483    0.0.0.0:*          LISTEN   -
tcp    0      0     0.0.0.0:5901     0.0.0.0:*          LISTEN   39125/Xvnc
Without looking too hard I was able to find the ports I knew from jupyter notebook list and the processes running them (you could use grep if they were hard to find). Then I killed them with
kill 8337 (or whatever PID was associated with each port).
Windows: commands for the Command Prompt
Be careful to save all the changes made in your notebooks prior to killing the Jupyter notebook server process.
i) Find the port number used by the Jupyter notebook server:
jupyter notebook list
ex.)
jupyter notebook list
Currently running servers:
http://127.0.0.1:8888/ :: D:\kimkk\Documents\JupyterNotebook
ii) Find the process ids that use the port number found above:
netstat -ano | find "found portnumber"
ex.)
netstat -ano | find "8888"
TCP 127.0.0.1:8888 0.0.0.0:0 LISTENING 24140
TCP 127.0.0.1:8888 127.0.0.1:55588 ESTABLISHED 24140
TCP 127.0.0.1:8888 127.0.0.1:55612 ESTABLISHED 24140
TCP 127.0.0.1:55588 127.0.0.1:8888 ESTABLISHED 6492
TCP 127.0.0.1:55612 127.0.0.1:8888 ESTABLISHED 6492
Find the rows whose second column (Local Address) ends in :8888; in the example above, the first, second, and third rows match. In those rows, the PID is in the last column (e.g. 24140).
iii) Kill the Jupyter notebook process with the found PID:
taskkill /PID found_PID /F
ex.)
taskkill /PID 24140 /F
/F means forcibly kill the process.
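If you do this often, steps i) to iii) are easy to script. A hypothetical Python helper (the function name and parsing are mine; it shells out to the same netstat -ano and taskkill commands):

import subprocess

def kill_jupyter_on_port(port):
    # Step ii: find the PID that is LISTENING on the given port.
    out = subprocess.run(["netstat", "-ano"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 5 and parts[1].endswith(f":{port}") and parts[3] == "LISTENING":
            # Step iii: force-kill the owning process.
            subprocess.run(["taskkill", "/PID", parts[4], "/F"], check=True)
            return parts[4]
    return None

kill_jupyter_on_port(8888)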
FYI, Jupyter Notebook from version 5.1 supports a stop command, as follows:
jupyter notebook stop 8888
refer to https://github.com/jupyter/notebook/issues/1950
Use the following command to stop Jupyter notebook running on port 8888:
fuser -k 8888/tcp
This might help:
run jupyter notebook list to get the port number jupyter uses.
run lsof -n -i4TCP:[port-number] to get PID.
The PID is the second field in the output.
run kill -9 [PID] to kill this process.
I ran into the same issue and followed the solution posted above. Just wanted to clarify the solution a little bit.
netstat -tulpn
will list all the active connections.
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 19524/python
You will need the PID, "19524" in this case. You can also use the following to get the PID of the port you are trying to shut down:
fuser 8888/tcp
This will give you 19524 as well.
kill 19524
will then shut down the server and free the port.
Section 3.3 should be applicable to this.
http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html
When a notebook is opened, its "computational engine" (called the kernel) is automatically started. Closing the notebook browser tab will not shut down the kernel; instead, the kernel will keep running until it is explicitly shut down.
To shut down a kernel, go to the associated notebook and click on menu File -> Close and Halt. Alternatively, the Notebook Dashboard has a tab named Running that shows all the running notebooks (i.e. kernels) and allows shutting them down (by clicking on a Shutdown button).
Here's a bash script that will kill ALL active Jupyter notebook servers in one go, based on the answers given by @Joalito and @Hongsoog:
#!/bin/bash
# For each server reported by `jupyter notebook list`, extract its port,
# look up the owning python process in netstat, and send it SIGTERM.
jupyter notebook list | {
    while IFS= read -r line
    do
        port=$(echo "$line" | grep -o -P '(?<=localhost:).*(?=/ :)')
        if [ -n "$port" ]   # skip lines (like the header) with no port
        then
            echo "killing jupyter notebook on port $port"
            netstat -tulpn | grep ":$port " | grep -o -P '(?<=LISTEN ).*(?=/py)' | xargs kill -15
        fi
    done
}
In the notebook web interface, the landing page is named "Files" and there is a "Running" tab next to it. That is where you can see the running notebooks and shut them down.
What worked for me was
jupyter notebook list, which in my case returned:
http://localhost:8889/?token=77d01d687da830b74eba946060660d :: /gpfs/blah/
http://localhost:8889/?token=1243162854ee3648e3154b26643794 :: /ifs/hello/world/
netstat -tulpn | grep "8888", which in my case returned:
tcp 7 0 127.0.0.1:8888 0.0.0.0:* LISTEN 17602/python3.9
And I found the PID in the last column: 17602.
kill -9 17602, which freed up the port.

fabric doesn't start twisted application as a daemon

I have written a simple automation script for deploying and restarting my twisted application on a remote Debian host. But I have an issue with starting it using twistd.
I have a run.tac file and start my application as follows inside a fabric task:
@task
def start():
    run("twistd -y run.tac")
And then just fab -H host_name start. It works great on localhost, but when I start the application on the remote host I get nothing. I can see in the log file that the application is actually launched, but the factory is not started. I've also checked netstat -l: nothing is listening on my port.
I've tried running in non-daemon mode, like so: twistd -ny run.tac, and, voila, the factory started and I can see it in netstat -l on the remote host. But that is not the way I want it to work, since it does not daemonize the application. Any help is appreciated.
There was an issue reported some time back which is similar to this:
Init scripts frequently fail to start their daemons
init-scripts-dont-work
It was also suggested that it seems to succeed with the option pty=False. Can you try and check that?
run("twistd -y run.tac", pty=False)
Some more pointers from the FAQ:
why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
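Putting the suggestion together with the task from the question, the fix would look roughly like this (the hang-up reasoning is paraphrased from the Fabric FAQ linked above):

from fabric.api import task, run

@task
def start():
    # Without a pseudo-terminal, the remote shell does not hang up the
    # daemonized twistd when the Fabric session ends, so the factory
    # keeps running after the task returns.
    run("twistd -y run.tac", pty=False)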

open an ssh tunnel from heroku python app on the cedar stack?

Is it possible to open a non-blocking ssh tunnel from a python app on the heroku cedar stack? I've tried to do this via paramiko and also asyncproc with no success.
On my development box, the tunnel looks like this:
ssh -L local_port:remote_server:remote_port another_remote_server
Can you please post the STDERR of ssh -v -L ...? Maybe you need to disable tty allocation and run ssh in batch mode.
This recipe ought to work with Python (even though it was written for a Rails app): https://stackoverflow.com/a/27361295/558639
The biggest challenge is convincing ssh to not prompt when it starts up.
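Following that advice, one way to keep the tunnel non-blocking without reimplementing it in paramiko is to launch the ssh client itself from Python. A sketch with placeholder port numbers (it assumes key-based authentication, since BatchMode disables all prompts):

import subprocess

# -N: no remote command; -T: no tty; BatchMode: fail instead of prompting.
tunnel = subprocess.Popen([
    "ssh", "-N", "-T",
    "-o", "BatchMode=yes",
    "-o", "ExitOnForwardFailure=yes",
    "-L", "5433:remote_server:5432",  # local_port:remote_server:remote_port
    "another_remote_server",
])

# Popen returns immediately, so the app keeps running while the tunnel is up.
# ... later, when finished:
tunnel.terminate()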
