I'm following the AWS tutorial Build a Modern Web Application - [Python].
I'm at Module 2B: Deploy A Service With AWS Fargate, Step B: Test The Services Locally.
I run my docker image with success with:
docker run -p 8080:8080 xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/mythicalmysfits/service:latest
When I preview the website on AWS Cloud9, I get the following error:
Oops, VFS connection does not exist
I've tried the following:
created new docker image on different region
checked the flask app routing (all good)
double checked my account id
checked AWS documentation
All of this and I can't figure out what is going on with the error. Am I missing something?
Do not run Cloud9 in a browser in Incognito mode.
Remove ad blockers.
Check the docker run output for errors
I found the solution: open the console in Chrome and run the docker run command there.
Here's what it looked like for me
I found the solution!
- Ad Blockers!
As soon as I disabled them, it worked.
My solution is to use the same (normal) browser, not a private/incognito window.
I'm on a Windows 10 machine. I have GPU running on the Google Cloud Platform to train deep learning models.
Historically, I have been running Jupyter notebooks on the cloud server without problems, but I recently began preferring to run Python notebooks in VS Code instead of the server-based Jupyter notebooks. I'd like to train my VS Code notebooks on my GPUs, but I don't have access to my Google instances from VS Code; I can only run locally on my CPU.
Normally, to run a typical model, I spin up my instance on the cloud.google.com Compute Engine interface. I use the Ubuntu on the Windows Subsystem for Linux installation and I get in like this:
gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080
I have tried installing the Cloud Code extension in VS Code so far, but as I go through the tutorials, I always get stuck somewhere. One error I keep experiencing is that gcloud won't work in anything EXCEPT my Ubuntu terminal. I'd like it to work in the terminal inside VS Code.
Alternatively, I'd like to run the code . command on my Ubuntu command line so I can open VS Code from there, but that won't work either. I've googled a few solutions, but they lead me back to these same problems, with neither gcloud nor code . working.
Edit: I just tried the Google Cloud SDK installer from https://cloud.google.com/sdk/docs/quickstart-windows
and then I tried running gcloud compute ssh from PowerShell within VS Code. This is the new error I got:
(base) PS C:\Users\user\Documents\dev\project\python> gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080
WARNING: The PuTTY PPK SSH key file for gcloud does not exist.
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
ERROR: (gcloud.compute.ssh) could not parse resource []
It still runs from Ubuntu using WSL, and I logged in fine. I guess I just don't know enough about how they're separated, what's shared, what's missing, and how to get all my command lines using the same configuration.
It seems as if your SSH key paths are configured correctly for your Ubuntu terminal but not for the VS Code one. If your account is not configured to use OS Login (with which Compute Engine stores the generated key with your user account), local SSH keys are needed. SSH keys are specific to each instance you want to access, and here is where you can find them. Once you have found them, you can specify their path using the --ssh-key-file flag.
Another option is to use OS Login as I have mentioned before.
Here is another thread with a similar problem to yours.
I'm currently learning to use the Django framework with PostgreSQL, with Docker and docker-compose.
Regularly, when I make a mistake (for example a syntax error in the views.py file), I cannot reach my Django app through my web browser anymore.
Firefox tells me:
Unable to connect
Firefox can't establish a connection to the server at localhost:8000
Chrome tells me:
This site can’t be reached
localhost refused to connect.
ERR_CONNECTION_REFUSED
I've had this happen several times, and I always managed to find the error in my code and correct it, and then everything went well again.
Currently, my code is working fine. But if I encounter this again (and this happens very often), I would like to be able to find the error quickly by myself.
So here is my question:
How can I see which file, at which line, contains the error?
I would like a proper error message telling me what went wrong instead of that annoying ERR_CONNECTION_REFUSED browser page over and over.
I hope I explained my issue well because I struggled to describe it to Google.
Thanks a lot in advance. :)
FYI:
Ubuntu 18.04.3 LTS Bionic (window manager i3wm)
Docker 19.03.4
docker-compose 1.17.1
python 3.7 (docker image)
Django 2.2.6 (inside the python 3.7 image)
PostgreSQL 12.0 (docker image)
Visual Studio Code 1.39.2
I finally found a solution.
I had the bad habit of running my docker-compose in detached mode.
When attached, syntax errors are shown directly in the terminal when the container stops.
I also added a script that runs my server in a loop. This way, the server relaunches automatically over and over until I correct the error, and I don't have to restart my Django server manually.
Thank you for helping me anyway.
I have a flask application using bokeh that is running in a Docker container, and it works when I use it on local machines.
However, when I deploy it to a GCP instance, even though I can reach the server, I have some AjaxDataSource() objects which are failing to connect.
Some details,
All the machines, local and gcp vm are running Ubuntu 18.04
The flask app is started like this,
app.run(host="0.0.0.0", port=6600, debug=False)
The Ajax route looks like this,
http://127.0.0.1:6600/land/tmidemo/data_rate?name=ResultBaseKeysV1
The GCP firewall rules look like,
Name               Type     Targets       Filters                Protocols / ports    Action  Priority  Network
tmiserver-egress   Egress   Apply to all  IP ranges: 0.0.0.0/0   tcp:6600, udp:6600   Allow   1000      default
tmiserver-ingress  Ingress  Apply to all  IP ranges: 0.0.0.0/0   tcp:6600, udp:6600   Allow   1000      default
The docker container is run like this,
docker run --net tminet --hostname=TEST -p 6600:6600 -v $(pwd):/app/public --name myserver --rm myserver
I am not using a Bokeh server. The AjaxDataSource() calls point back to the flask application, not another (bokeh) server
There is a lot that works,
able to use the GCP external ip address and reach the server
going from web page to web page works, so flask routing is working
What's NOT working is the Ajax() call that uses 127.0.0.1, although this DOES work when I run the container on a local machine.
The error I see in the inspect window is ERR_CONNECTION_REFUSED
The GCP instance hosts.conf DOES include a line for 127.0.0.1 localhost
I tried (from here) on the GCP VM instance, same result,
iptables -A INPUT -i docker0 -j ACCEPT
I also tried (from here) changing the Docker run network to --net="host" and the result is identical.
I also tried adding --add-host localhost:127.0.0.1 to the Docker run command, same result.
I think the problem is configuring GCP to know how to route a request to 127.0.0.1, but I don't know where to check or configure this beyond what I have already done.
I wasn't able to resolve the specific issue I was having, but I tried a different approach to the URL for the AjaxDataSource() and it worked, and I think it's a better approach...
I used the Flask url_for() function to create a link to the route that the AjaxDataSource() needs, and this worked. The resulting link looks something like,
/land/tmidemo/data_rate/ResultBaseKeysV1
ie, no http://127.0.0.1, and this seems to work in all cases, my dev environment and GCP.
I think I tried this a long time ago and it didn't work, because I use "flask" URLs all over the place, but for some reason I thought I needed "http://127.0.0.1" for the Ajax stuff. It works now... moving on!
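A minimal sketch of the url_for() pattern, for reference (the route and endpoint name here are hypothetical, modeled on the URL above):

```python
from flask import Flask, url_for

app = Flask(__name__)

# Hypothetical route mirroring the data_rate endpoint described above
@app.route("/land/tmidemo/data_rate/<name>")
def data_rate(name):
    return {"name": name}

with app.test_request_context():
    # url_for builds a host-relative path, so the browser resolves it
    # against whichever host served the page (local dev box or GCP VM)
    ajax_url = url_for("data_rate", name="ResultBaseKeysV1")
    print(ajax_url)  # -> /land/tmidemo/data_rate/ResultBaseKeysV1
```

Because the generated URL has no scheme or host, the same page works unchanged in every environment.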
I'm trying to run a bokeh server within a Docker container but bokeh doesn't allow me to enter commands while the server is running. Is there a way to run the server detached so that I can enter other commands while the page is up? I'm using a (slightly modified) ubuntu image with python3 for this container.
If anyone happens to also know why I wouldn't be able to access the page from my host machine after exposing the ports, that'd be even better; that's the larger issue I'm trying to solve.
You can use this line:
bokeh serve --show --allow-websocket-origin=localhost:5006 file_name.py
Put the following at the end of the Dockerfile to run the command above and be able to access the app you're trying to host:
CMD ["bokeh","serve","--show","--allow-websocket-origin=localhost:5006","file_name.py"]
I have a remote machine at my workplace, where we developers run servers or docker containers. Everything was working fine, but a while back something went wrong.
If I run the Python Flask app
from app import app
app.run(host='0.0.0.0', port=5050)
I get the message
* Running on http://0.0.0.0:5050/
and I am able to access the above from my local machine using the remote server machine's ip:5050. But if I run a docker container with docker run -itd <conta_image_name> -p 80:90 --add-host=localdomain.com:machine_ip_address, I get an error message saying "IPv4 forwarding is disabled. Networking will not work."
Now this issue is in production, so I really need someone to shed some light on what might be wrong, or let me know what more info I need to provide.
I have fixed this issue myself following this: https://success.docker.com/article/ipv4-forwarding
Another solution:
Try adding --net=host to the docker run command.
https://medium.com/@gchandra/docker-ipv4-forwarding-is-disabled-8499ce59231e