Everything worked fine when I ran it on Docker, but after I migrated it to Kubernetes it stopped connecting to the DB. It says:
pymongo.errors.ServerSelectionTimeoutError: connection closed
whenever I try to access a page that uses the DB.
I connect like this:
app.config['MONGO_DBNAME'] = 'pymongo_db'
app.config['MONGO_URI'] = 'mongodb://fakeuser:FakePassword@ds1336984.mlab.com:63984/pymongo_db'
Any way to get it to connect?
Edit:
I think it has more to do with the Istio sidecars: when deployed on Kubernetes without Istio, it runs normally. The issue only appears when running with Istio.
Most likely Istio (the Envoy sidecar) is controlling egress traffic. You can check whether you have any ServiceEntry and VirtualService objects in your cluster for your specific application:
$ kubectl -n <your-namespace> get serviceentry
$ kubectl -n <your-namespace> get virtualservice
If they exist, check whether they allow traffic to ds1336984.mlab.com. If they don't exist, you will have to create them.
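To confirm whether egress is actually being blocked, a quick check you could run from inside the application pod (a minimal sketch; the URI is the placeholder from the question and the 5-second timeout is arbitrary):

# Minimal connectivity check to run from inside the pod.
# The URI below is the placeholder from the question.
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

uri = 'mongodb://fakeuser:FakePassword@ds1336984.mlab.com:63984/pymongo_db'
client = MongoClient(uri, serverSelectionTimeoutMS=5000)

try:
    # server_info() forces server selection, so it fails fast if egress is blocked.
    print(client.server_info()['version'])
except ServerSelectionTimeoutError as exc:
    print('Cannot reach the database:', exc)

If this times out with the Istio sidecar injected but succeeds without it, the sidecar's egress policy is the culprit.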
I can connect to Postgres in the following way.
On my local machine I run:
ssh name@ip -p 22
then enter the password, and then
sudo docker-compose exec postgres bash
After that I have full access to my Postgres DB.
How can I connect to that DB with Python?
I know about libraries like psycopg2, but I didn't find any example of how to connect to a DB that is on another server and running in Docker.
There are three layers here.
1. Your local machine.
2. The server.
3. The container running the database.
Between each layer there is a gap. Between (1) and (2) you have the Internet. Between (2) and (3) you have Docker networking.
Now, what you described in the question is this: you first cross the (1)-(2) gap with SSH, then you cross the (2)-(3) gap with the command
sudo docker-compose exec postgres bash
Now for your question in the comment: according to the Docker documentation, docker-compose exec <container-name or id> <command> runs a command in a container, and sudo elevates your privileges to the root account. Since the command is bash, you essentially open an interactive shell in the container.
This method of crossing the two gaps works, as you observed, but it will not work for the psycopg2 library.
Again per the Docker documentation, you can tell Docker to eliminate the (2)-(3) gap for you; this is known as publishing a port. You tell Docker to map a port on the server to a port on the container, so a connection to that port on the server is passed through to the container on the defined port.
Now the only gap you need to cross is (1)-(2), which psycopg2 can do easily (provided the firewall allows inbound connections on that port).
The details of how to tell Docker to eliminate the (2)-(3) gap are in the answer to Connecting to Postgresql in a docker container from outside. It also shows how to connect to the database with psql directly from your local machine.
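As a rough sketch of the Python side, assuming you have published the container's port 5432 on the server as described there (the host, database name and credentials below are placeholders):

# Sketch only: assumes the container's 5432 is published on the server
# and the firewall allows inbound connections on that port.
import psycopg2

conn = psycopg2.connect(
    host='your.server.ip',   # the server (layer 2), not the container
    port=5432,               # the port published by Docker
    dbname='mydb',           # placeholder database name
    user='myuser',           # placeholder credentials
    password='mypassword',
)

with conn, conn.cursor() as cur:
    cur.execute('SELECT version();')
    print(cur.fetchone())

conn.close()

If you prefer not to expose the port publicly, you could instead keep it bound to the server's localhost and open an SSH tunnel (ssh -L 5432:localhost:5432 name@ip), then point psycopg2 at 127.0.0.1.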
I am running Ubuntu 18.04 and am following this tutorial to set up a Flask server:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04
If I log off and try to log back in, I am unable to SSH into my instance, and it gives me this error:
Connection via Cloud Identity-Aware Proxy Failed
Code: 4003
Reason: failed to connect to backend
You may be able to connect without using the Cloud Identity-Aware Proxy.
I have tried creating an instance from an image of the original. I've tried adjusting my firewall and then SSHing in on another port. I've tried connecting without using the Cloud Identity-Aware Proxy. And it happens every time I set up a new machine, AFTER I set up Nginx.
Some other people on here have encountered the same problem, e.g. Error 4003: can't ssh login into the instance that i created in google cloud platform and Can't SSH into Google Cloud VM, but neither thread has any really helpful answers. Has anyone who's encountered this been able to fix it?
Turns out the issue was the firewall. I had enabled ufw but forgot to allow SSH connections, and I locked myself out. I created an entirely new machine and allowed port 22/SSH from the get-go.
I have a flask application using bokeh that is running in a Docker container, and it works when I use it on local machines.
However, when I deploy it to a GCP instance, even though I can reach the server, I have some AjaxDataSource() objects which are failing to connect.
Some details:
All the machines, local and gcp vm are running Ubuntu 18.04
The Flask app is started like this:
app.run(host="0.0.0.0", port=6600, debug=False)
The Ajax route looks like this:
http://127.0.0.1:6600/land/tmidemo/data_rate?name=ResultBaseKeysV1
The GCP firewall rules look like this:
Name                Type     Targets        Filters                 Protocols / ports     Action   Priority   Network
tmiserver-egress    Egress   Apply to all   IP ranges: 0.0.0.0/0    tcp:6600, udp:6600    Allow    1000       default
tmiserver-ingress   Ingress  Apply to all   IP ranges: 0.0.0.0/0    tcp:6600, udp:6600    Allow    1000       default
The Docker container is run like this:
docker run --net tminet --hostname=TEST -p 6600:6600 -v $(pwd):/app/public --name myserver --rm myserver
I am not using a Bokeh server. The AjaxDataSource() calls point back to the Flask application, not to another (Bokeh) server.
There is a lot that works:
I am able to use the GCP external IP address and reach the server.
Going from web page to web page works, so Flask routing is working.
What's NOT working is that Ajax() call, which uses 127.0.0.1, although this DOES work when I run the container on a local machine.
The error I see in the inspect window is ERR_CONNECTION_REFUSED
The GCP instance hosts.conf DOES include a line for 127.0.0.1 localhost
I tried this (from here) on the GCP VM instance, with the same result:
iptables -A INPUT -i docker0 -j ACCEPT
I also tried (from here) changing the Docker run network to --net="host" and the result is identical.
I also tried adding --add-host localhost:127.0.0.1 to the Docker run command, same result.
I think the problem is configuring GCP to know how to route a request to 127.0.0.1, but I don't know where to check or configure this beyond what I have already done.
I wasn't able to specifically resolve the issue I was having, but I tried a different approach to the URL for the AjaxDataSource() and it worked, and I think it's a better approach anyway.
I used Flask's url_for() function to create a link to the route that the AjaxDataSource() needs, and this worked. The resulting link looks something like:
/land/tmidemo/data_rate/ResultBaseKeysV1
i.e., no http://127.0.0.1 prefix. Because the URL is relative, the browser sends the Ajax request to whichever host served the page, so this works in all cases, my dev environment and GCP.
I think I tried this a long time ago and it didn't work; I use "flask" URLs all over the place, but for some reason I thought I needed "http://127.0.0.1" for the Ajax stuff. It works now... moving on!
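For reference, a minimal sketch of what that looks like (the routes, data and polling interval are illustrative, not the exact code from my app):

# Sketch: build the AjaxDataSource URL with url_for() instead of
# hard-coding http://127.0.0.1. Routes, data and interval are illustrative.
from bokeh.models import AjaxDataSource
from flask import Flask, jsonify, url_for

app = Flask(__name__)

@app.route('/land/tmidemo/data_rate/<name>', methods=['POST'])
def data_rate(name):
    # Whatever columns the AjaxDataSource should poll for.
    return jsonify(x=[1, 2, 3], y=[4, 5, 6])

@app.route('/land/tmidemo/<name>')
def plot_page(name):
    # url_for() yields a relative URL such as /land/tmidemo/data_rate/ResultBaseKeysV1,
    # so the browser's Ajax call goes back to whichever host served the page.
    source = AjaxDataSource(data_url=url_for('data_rate', name=name),
                            polling_interval=1000)
    # In the real app, the bokeh figure built from `source` is embedded here.
    return 'data source polls ' + source.data_url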
How can you make your local Django development server think it's running inside your AWS network using SSH tunneling?
My scenario: I'm running a local Django server (i.e. python manage.py runserver) with Redis as the cache backend (Elasticache). When my app runs in the AWS environment it has access to Elasticache, but locally it doesn't (and that's a good thing). If for some reason I want to test my local environment against Elasticache, I need to somehow use SSH tunneling to make AWS think it's running inside the VPC network.
I've tried to get this working with the commands below. I've confirmed that I can connect locally using SSH tunneling with Redis Desktop Manager, so I know 100% that AWS supports this; my problem is now doing the same thing with Django.
This is what I've tried:
> python manage.py runserver 8000
> ssh -i mykey.pem ec2-user@myRandomEC2.com -L 6379:localhost:6379
I get an "Error 60 connecting to" message when I visit http://127.0.0.1:8000/.
What am I doing wrong here?
Notes:
ec2-user@myRandomEC2.com is not the Redis server, just another EC2 instance on AWS that has access to Elasticache and that I want to use as a tunnel.
mykey.pem has access and the correct permissions.
The ec2 instance has all the correct permissions and ports for access.
I tested SSH tunneling with Redis Desktop Manager and this works for that software.
Elasticache and the EC2 instances are all in the same region and can connect to each other.
ssh -i mykey.pem ec2-user@myRandomEC2.com -L 6379:localhost:6379
This will forward requests from your local machine (on :6379) to localhost:6379 on the EC2 instance. That is not what you want (unless you have Redis running locally on the EC2 instance).
You should use the Elasticache IP instead:
ssh -i mykey.pem ec2-user@myRandomEC2.com -L 6379:<elasticache-ip>:6379
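With that tunnel open, the Django cache settings only need to point at the local end of it. A sketch assuming django-redis (adjust the BACKEND path to whatever cache backend you actually use):

# settings.py sketch: point the cache at the local end of the SSH tunnel.
# Only works while the tunnel to <elasticache-ip>:6379 is open.
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/0',   # local end of the tunnel
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
    }
}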
I've got a Django project using django-redis 3.8.0 to connect to an AWS instance of Redis. However, I receive ConnectionError: Error 111 connecting to None:6379. Connection refused. when trying to connect. If I SSH into my EC2 instance and use redis-py from the shell, I am able to read and write from the cache just fine, so I don't believe it's a security policy issue.
OK, figured it out. What I needed to do was prefix my LOCATION with redis://. This is specific to the django-redis library and how it parses the location URL. That explains why I was able to connect when I manually set up a StrictRedis connection using the Python redis library.
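A sketch of the relevant bit of settings.py (the endpoint is made up, and the BACKEND path may differ between django-redis versions):

# Without the redis:// scheme the location failed to parse, giving
# "Error 111 connecting to None:6379". The endpoint below is a made-up example.
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.cache.RedisCache',   # backend name used by django-redis 3.x
        'LOCATION': 'redis://my-cluster.abc123.0001.use1.cache.amazonaws.com:6379',
    }
}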
If you are running Elasticache Redis, you can't access it from outside AWS; that is why you are getting the error.
From AWS FAQ:
Please note that IP-range based access control is currently not
enabled for Cache Clusters. All clients to a Cache Cluster must be
within the EC2 network, and authorized via security groups as
described above.
http://aws.amazon.com/elasticache/faqs/