I've got a Django project using django-redis 3.8.0 to connect to a Redis instance on AWS. However, I receive ConnectionError: Error 111 connecting to None:6379. Connection refused. when trying to connect. If I SSH into my EC2 instance and use redis-py from the shell, I can read and write to the cache just fine, so I don't believe it's a security policy issue.
OK, figured it out. What I needed to do was prefix my LOCATION with redis://. This is specific to how the django-redis library parses the location URL, which explains why I was able to connect when I manually set up a StrictRedis connection using the plain redis-py library.
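For reference, a hedged sketch of the corrected setting (the endpoint hostname is a placeholder; django-redis 3.8.x expects a URL-style LOCATION):

    # settings.py
    CACHES = {
        "default": {
            "BACKEND": "django_redis.cache.RedisCache",
            # Without the redis:// scheme the host parses as None,
            # hence "connecting to None:6379"
            "LOCATION": "redis://my-cluster.abc123.use1.cache.amazonaws.com:6379/0",
        }
    }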
If you are running ElastiCache Redis, you can't access it from outside AWS; that is why you are getting the error.
From the AWS FAQ:
Please note that IP-range based access control is currently not
enabled for Cache Clusters. All clients to a Cache Cluster must be
within the EC2 network, and authorized via security groups as
described above.
http://aws.amazon.com/elasticache/faqs/
Related
I am trying to deploy a Flask Python application on AWS using ECS. My ECS tasks are running, and in the ECS logs I can see that the server has started. But when I open the public IP in a browser, it still shows a connection refused error.
I have added security group rules but still can't resolve the problem.
Assuming you added the correct security group rule (you didn't provide any actual details about that in your question), your problem is most likely that you have bound Flask to 127.0.0.1, so it currently only accepts network requests from inside the container (localhost). You need to bind Flask to 0.0.0.0 so that it accepts requests from anywhere.
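For example, a minimal sketch assuming the usual Flask entry point (the route and port are placeholders):

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "ok"

    if __name__ == "__main__":
        # 0.0.0.0 binds all interfaces, so traffic reaching the container
        # from outside (e.g. via the ECS public IP) is accepted
        app.run(host="0.0.0.0", port=5000)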
I have a server that runs a Python file connecting to two external MySQL DBs. One of those DBs can be reached easily, while the other requires IPs to be whitelisted in order to grant access. That server's IP is already whitelisted, and everything works as intended when the code is run directly.
The problem arises when I run the Dockerized variation of the application. The first DB works just as it did before, but the second DB no longer does. From inside the container I can ping the second DB, but whenever I try to access it via the code hosted on the server, none of the functions that use it return data. I noticed that the container has a separate IP, which may be where the problem begins, since the container's IP would not have been whitelisted. I am fairly new to Docker, so any documentation links that would assist me would be extremely helpful.
So, for anyone dealing with this situation in the future: I added the line
network_mode: "host"
to my docker-compose.yaml file.
Here are the docs related to this: https://docs.docker.com/network/host/
Essentially, the container's traffic was not recognized by the whitelist, so it was not allowed access to the second DB. With this change, the container shares the same network as the server hosting it, and since that server was already whitelisted, it all worked out of the gate.
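For reference, a minimal sketch of where that line sits in the compose file (service and image names are placeholders; note that ports: mappings are ignored in host mode):

    version: "3"
    services:
      app:
        image: my-app
        network_mode: "host"  # share the host's network stack and IP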
If you are using plain docker run rather than Compose, then use
--net=host
within your run command. Here is an SO link about what this option does:
What does --net=host option in Docker command really do?
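For example (the image name is a placeholder):

    docker run --net=host my-app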
I am running Ubuntu 18.04 and am following this tutorial to set up a Flask server:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04
If I log off and try to log back in, I am unable to SSH into my instance, and it gives me this error:
Connection via Cloud Identity-Aware Proxy Failed
Code: 4003
Reason: failed to connect to backend
You may be able to connect without using the Cloud Identity-Aware Proxy.
I have tried creating an instance from an image of the original.
I've tried adjusting my firewall and then SSHing in on another port.
I've tried to connect without using the Cloud Identity-Aware Proxy.
And it happens every time I set up a new machine AFTER I set up Nginx.
There are some other people on here who have encountered the same problem, like Error 4003: can't ssh login into the instance that i created in google cloud platform
and
Can't SSH into Google Cloud VM
but neither thread really has any helpful answers. Has anyone who's encountered this been able to fix it?
Turns out the issue was the firewall.
I had enabled ufw but forgot to allow SSH connections, and I locked myself out.
I created an entirely new machine and allowed port 22/SSH from the get-go.
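For anyone repeating this setup, the allow rule has to be in place before (or immediately after) enabling the firewall; the standard ufw commands are:

    sudo ufw allow OpenSSH   # or: sudo ufw allow 22/tcp
    sudo ufw enable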
I have an instance of PostgreSQL 11 in GCP as Cloud SQL.
I want to connect pgAdmin to the server, but I don't know which port to use. Where can I see that?
I don't want to specify my own IP address for the server, so I whitelisted all connections to it by putting 0.0.0.0/0 as an authorized IP in the GCP console.
Multiple methods can be used to connect external applications such as pgAdmin to Cloud SQL. Here is the documentation covering all the methods and the steps to follow. Since you do not wish to specify your IP address, the Cloud SQL Proxy might be a good alternative. Documentation guidelines for that method can be found here, but here is a quick summary (a command sketch follows the steps):
Enable the API
Install the proxy client on your local machine
Determine how you will authenticate the proxy
If required by your authentication method, create a service account
Determine how you will specify your instances for the proxy
Start the proxy
Update your application to connect to Cloud SQL using the proxy
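As a sketch of the last two steps (the instance connection name is a placeholder; Cloud SQL for PostgreSQL listens on the default port 5432):

    ./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432

pgAdmin can then connect to 127.0.0.1:5432 as if the database were running locally.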
When I run a Python microservice in a Docker or Kubernetes container, it works just fine, but with the Istio service mesh it does not.
I have added a ServiceEntry for two of my outbound external HTTP APIs. I can access the URL content from inside the container (which is inside the mesh) using curl, so I think the service entries are fine and working.
But when I try from the microservice, which uses the xml.sax parser in Python, I get the error upstream connect error or disconnect/reset before headers, though the same application works fine without Istio.
I think it is something related to Istio, Envoy, or Python.
Update: I did inject the istio-proxy sidecar. I have also added a ServiceEntry for the external MySQL database, and MySQL connects fine from the microservice.
I have found the reason this was not working. My Python service uses the xml.sax parser to parse XML fetched from the internet, and that parser pulls the document through the legacy urllib package, which issues HTTP/1.0 requests.
Envoy does not accept HTTP/1.0 by default, hence the failure. As a workaround I set global.proxy.includeIPRanges="10.x.0.1/16" for Istio using helm, which bypasses the Envoy sidecar entirely for all outgoing connections outside the given IP ranges.
But I would prefer not to bypass Istio globally.
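If changing the service code is an option, a narrower workaround might be to fetch the document over HTTP/1.1 yourself and hand the bytes to the parser, instead of letting xml.sax pull the URL through urllib. This is an untested sketch; the URL and handler are placeholders:

    import requests  # requests speaks HTTP/1.1, which Envoy accepts
    import xml.sax

    class FeedHandler(xml.sax.ContentHandler):
        # Placeholder handler; substitute the service's real one
        def startElement(self, name, attrs):
            print("start:", name)

    # Fetch over HTTP/1.1, then parse the response body from memory
    resp = requests.get("http://example.com/feed.xml", timeout=10)
    resp.raise_for_status()
    xml.sax.parseString(resp.content, FeedHandler())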