I'm having considerable difficulty getting HTTPS to work on my EC2 instance, which runs a Python project. The request just times out (ERR_CONNECTION_TIMED_OUT). HTTP works fine, however. The steps I've taken are as follows.
I've created a certificate in ACM for the following domains: *.mywebsite.com and mywebsite.com
I've set up Route 53 as follows:
Routing policy on the A records is Simple.
I've gone into the Listener for my Load Balancer for my EC2 instance and CHANGED the port from 80 (HTTP) TO 443 (HTTPS) and added my certificate.
Note: the "Forward To" is a Target Group running on port 80 (HTTP). I've read that this is correct.
I've then gone into the Inbound Rules for my Security group, and added HTTPS
At this point, I've got the following questions:
a) Given that this is a Python/Django project, can I enable HTTPS for EC2 entirely through the AWS console, or do I need to add config files and deploy them to my instance?
b) Do I need to create a target group running on HTTPS?
c) Do I need listeners on my load balancer for port 80 and port 443, or just port 443?
d) On my security group, do I need port 80 open to 0.0.0.0/0 and ::/0?
e) Should the A record be the DNS name of the load balancer, or should it be the CNAME of my environment?
Thanks for your help! Once we get the answer here, I'm going to write a guide and post it on YouTube.
Let me start by giving you a brief overview of how a request flows in this case.
As you have rightly guessed, the load balancer (an Application Load Balancer, to be specific) can handle SSL traffic. This also means that from the load balancer to the origin server (the mentioned target group in this case) only HTTP traffic will flow, not HTTPS, so you don't have to worry about handling certificates on the server. The response from the origin server is then wrapped up again in an SSL tunnel and sent back to the client by the ALB.
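If you prefer to script this step instead of clicking through the console, a minimal boto3 sketch of that listener setup could look like the following (the ARNs are placeholders, not values from your account):

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs - substitute the ones from your own account.
LB_ARN = "arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-alb/abc123"
TG_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/my-http-targets/def456"
CERT_ARN = "arn:aws:acm:region:account:certificate/xyz789"

# HTTPS listener on 443: TLS terminates at the ALB, which then forwards
# plain HTTP to the port-80 target group.
elbv2.create_listener(
    LoadBalancerArn=LB_ARN,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": CERT_ARN}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
)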
This means that your end users should be able to connect to the load balancer on port 443 at least, and also on port 80 (which can redirect to 443).
This means the security group of your load balancer should have port 443 (and optionally 80) open to the world, or to your users.
Between the ALB and the origin server, the traffic flows on whatever port your app is running on, so that is the port on which the server's security group should allow access from the ALB.
To rephrase: the server (EC2) security group should allow the ALB on whichever port the application is listening.
Note: This doesn't have to be 80 or 443, it can also be 8080, as long as your target group knows about it and is forwarding the request on that port.
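As a sketch of that security-group split (the group IDs and the app port below are assumptions for illustration): the ALB's group is open to the world on 443, while the instance's group only trusts the ALB's group on the application port.

import boto3

ec2 = boto3.client("ec2")

ALB_SG = "sg-0123456789abcdef0"   # placeholder: the load balancer's security group
EC2_SG = "sg-0fedcba9876543210"   # placeholder: the instance's security group
APP_PORT = 80                     # whatever port the target group forwards to

# ALB: allow HTTPS from anywhere (IPv4 and IPv6).
ec2.authorize_security_group_ingress(
    GroupId=ALB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        "Ipv6Ranges": [{"CidrIpv6": "::/0"}],
    }],
)

# EC2: allow the app port only from the ALB's security group, not the world.
ec2.authorize_security_group_ingress(
    GroupId=EC2_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": APP_PORT, "ToPort": APP_PORT,
        "UserIdGroupPairs": [{"GroupId": ALB_SG}],
    }],
)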
Now to answer your questions:
a) Given that this is a Python/Django project, can I enable HTTPS for EC2 entirely through the AWS console, or do I need to add config files and deploy them to my instance?
You don't have to do this. As I mentioned, the encryption/decryption can be offloaded to the ALB. Read more about it in the docs here.
b) Do I need to create a target group running on HTTPS?
This builds on the previous answer: no, you don't have to. The app server/EC2 instance should not be concerned with this.
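For completeness, a hedged boto3 sketch of such a plain-HTTP target group (the name, VPC ID and port are illustrative):

import boto3

elbv2 = boto3.client("elbv2")

# The target group stays HTTP on the instance port; only the listener speaks HTTPS.
elbv2.create_target_group(
    Name="django-http-targets",          # illustrative name
    Protocol="HTTP",
    Port=80,                             # or 8080, etc. - wherever the app listens
    VpcId="vpc-0123456789abcdef0",       # placeholder VPC ID
    TargetType="instance",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/",
)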
c) Do I need listeners on my load balancer for port 80 and port 443, or just port 443?
This depends on your use case. The bare minimum is to have only 443. If you want to allow users to still land on the HTTP site and then be redirected to the more secure HTTPS version, you can again make use of the ALB for this. More about it here.
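A sketch of that HTTP-to-HTTPS redirect listener with boto3 (again with a placeholder ARN); it is the same behaviour you can configure from the console:

import boto3

elbv2 = boto3.client("elbv2")

LB_ARN = "arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-alb/abc123"

# Port-80 listener whose only job is to redirect every request to HTTPS.
elbv2.create_listener(
    LoadBalancerArn=LB_ARN,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",
        },
    }],
)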
d) On my security group, do I need port 80 open to 0.0.0.0/0 and ::/0?
For the ALB, yes, but not for the EC2 instances. Remember that EC2 never communicates directly with users, only with the ALB, so you can control the traffic on EC2 more tightly.
e) Should the A record be the DNS name of the load balancer, or should it be the CNAME of my environment?
Use Alias records. They are much easier to manage, and AWS will take care of the mapping. More about this here.
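If you want to script the alias record as well, a minimal sketch (the hosted zone ID, the ALB DNS name, and the ALB's canonical hosted zone ID are placeholders you can read off the Route 53 and load balancer consoles):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",          # your hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "mywebsite.com.",
            "Type": "A",
            "AliasTarget": {
                # The ALB's canonical hosted zone ID, not your own zone's ID.
                "HostedZoneId": "Z0987654321EXAMPLE",
                "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)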
I finished writing a Django application and now I want to deploy it,
I have a Windows server and have successfully installed Python and Django on it,
Now my app runs on localhost on my Windows server,
Now I want to make the site public, meaning that anyone who goes to the IP address of my Windows server can browse my site,
Is there a simple way to do this without using IIS?
thank you
Step One
Set a static IP for your server (it's possible without this, but easier with one).
Once set, log into your router as admin and forward port 80 to your server's IP address.
There is a tutorial for this at https://portforward.com
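Forwarding port 80 only helps if the Django app is actually listening on the server's network interface rather than just on localhost. A minimal sketch using the pure-Python waitress WSGI server (pip install waitress), assuming the project is called mysite, could be:

# serve.py - run with "python serve.py" on the Windows server
from waitress import serve
from mysite.wsgi import application  # assumed project name

# Bind to all interfaces so forwarded traffic from the router reaches Django,
# not just requests made from localhost.
serve(application, host="0.0.0.0", port=80)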
Step Two
If you already have a domain name, ignore this bit
Purchase a domain name from a domain name registrar such as
1and1 / Ionos (https://ionos.com)
(I would personally advise against https://GoDaddy.com, the prices tend to be odd there)
Step 2.1
Go into your domain's settings, and forward traffic to the external IP address of your router.
Hope that this helped!
I've never done what you're planning, but if you plan to host just one domain then localhost (127.0.0.1) will work just fine. If you plan to host multiple domains, you will need to find a way to resolve the right domain to the right site.
Ports to open on both router and Win Server
80 for HTTP
443 for HTTPS
Make sure that you have a static IP provided by your ISP. You will also need to make sure that your ISP does not block port 80; if they do, ask them to unblock it.
If you don't have a static IP, or your ISP doesn't allow you to open port 80, you can use DynDNS to forward traffic to your server, but this option is not the best.
Your server will also need a static local IP, as mentioned by Legorooj.
I have a Scrapy crawler and I want to rotate the IP so my application will not be blocked. I am setting the IP in Scrapy using request.meta['proxy'] = 'http://51.161.82.60:80', but this is a VM's IP. My question is: can a VM or machine's IP be used for Scrapy, or do I need a proxy server?
Currently I am doing this. It does not throw any error, but when I get the response from http://checkip.dyndns.org it shows my own IP, not the updated IP which I set in meta. That is why I want to know whether I need a proxy server.
The reason you are getting your own IP is that your VM is 'transparent'. You will need to intercept the request at the VM, remove tracking headers such as X-Forwarded-For, and make sure your server knows whom to respond to when it receives the response from the website you are crawling.
The simplest solution, though, is to install a proxy service on your VM, for example Squid, and then set forwarded_for off to make it an anonymous proxy server. There may be other request options to tweak to make it truly anonymous. Remember to restrict access to whitelisted IP addresses with acl specialIP src x.x.x.x and http_access allow specialIP in /etc/squid/squid.conf. The default port of Squid is 3128.
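On the Scrapy side nothing really changes from what the question already does, except the proxy now points at Squid's port. A minimal sketch, assuming Squid runs on the VM from the question (51.161.82.60) on its default port 3128 and your client IP is whitelisted:

import scrapy

class CheckIpSpider(scrapy.Spider):
    name = "checkip"

    def start_requests(self):
        yield scrapy.Request(
            "http://checkip.dyndns.org",
            # Route the request through the Squid proxy on the VM
            # instead of connecting directly from this machine.
            meta={"proxy": "http://51.161.82.60:3128"},
        )

    def parse(self, response):
        # Should now report the VM's IP rather than your own.
        self.logger.info(response.text)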
You definitely need a proxy server. The meta entry only tells Scrapy where to send the request; the remote server still sees the public IP that actually opens the TCP connection.
When I executed the edit() command while connected to a managed instance, I ended up with the following error. What do I have to do to get past this problem?
wls:/offline> connect('Admin60000','sun1rise','t3://my-comm-app-serv:60001')
Connecting to t3://my-comm-app-serv:60001 with userid Admin60000 ...
Successfully connected to managed Server "MiCommApp" that belongs to domain "MiBeaDir".
Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.
wls:/MiBeaDir/serverConfig>cd('/Servers/MiCommApp/SSL/MiCommApp')
wls:/MiBeaDir/serverConfig/Servers/MiCommApp/SSL/MiCommApp> edit()
Edit MBeanServer is not enabled on a Managed Server.
Port 60001 is the port of one of the managed instances that run under the admin server; the admin server itself runs on port 60000.
That is because, for managed servers, WLST functionality is limited to browsing the configuration bean hierarchy. Read the excerpt below from the official WebLogic documentation.
To edit configuration beans, you must be connected to an
Administration Server, and you must navigate to the edit tree and
start an edit session, as described in edit and startEdit,
respectively.
If you connect to a Managed Server, WLST
functionality is limited to browsing the configuration bean hierarchy.
While you cannot use WLST to change the values of MBeans on Managed
Servers, it is possible to use the Management APIs to do so. BEA
Systems recommends that you change only the values of configuration
MBeans on the Administration Server. Changing the values of MBeans on
Managed Servers can lead to an inconsistent domain configuration.
So, basically, you need to connect to your admin server (currently you are connecting to your managed server, as per the logs you have provided: Successfully connected to managed Server "MiCommApp" that belongs to domain "MiBeaDir".) and then edit configurations using the edit() and startEdit() WLST commands.
BTW, I connect to my server using the following commands:
If HTTPS - connect(url='t3s://abc.xyz.com:37001',adminServerName='AdminServer')
If HTTP - connect(url='t3://abc.xyz.com:37001',adminServerName='AdminServer')
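Putting it together, a sketch of the same session pointed at the admin server's port 60000 from the question (the attribute being changed is just an example):

connect('Admin60000', 'sun1rise', 't3://my-comm-app-serv:60000')

edit()        # switch to the edit tree - only available on the admin server
startEdit()   # start an edit session

cd('/Servers/MiCommApp/SSL/MiCommApp')
set('Enabled', true)   # example change only

save()        # save the pending changes
activate()    # push them out to the domain
disconnect()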
So here's my setup:
IP camera -> Raspberry Pi (Raspbian) -> WiFi -> my server
I am currently using motion to retrieve the camera's stream on my RPi. I am able to view it on the local network (192.168.x.x:8080) through my browser (it's an Mjpeg stream).
I would now like to publish this online so I can view it from http://camera.example.com/ for example.
The difference here is that I would like to do so independently of the WiFi network used (so I cannot simply open a port on my router to accept a connection from the server).
I think this would be possible using WebSockets, but I have never used them before. Or is there some tool that already exists AND is easy to use? There are many streaming tools out there, but they all seem to be Windows GUI programs rather than command-line tools.
The choice of language is Python, but if for some reason another language would be more suited, that is fine too. Also, I do not need to use motion specifically, so if there is a better alternative that would work too. Thanks!
As a set of minimum steps you will need
A domain name that points to your public IP address
A way of keeping the DNS records for the domain up to date as your IP periodically changes (a free dynamic IP from noip.com will help with the first point, and they have a client you can install which will keep their DNS updated with your current IP)
A port forwarding rule on your router to forward port 8080 (and the stream port for the camera stream, probably 8081 but you can change that in the Motion config) to the internal (192.168.x.x) IP of your Pi
A DHCP reservation in your router to reserve the IP of the Pi (otherwise if the internal IP changes you will need to change the port forwarding rule)
You will now be able to access it on the internet via the domain name, e.g. http://camera.example.com:8080
BUT...
You have just allowed insecure HTTP (unencrypted) access to a device on your home network, which could then be exploited (someone could view your cameras, or perhaps gain further access to the Pi and other devices on your network...)
You can enable authentication for the web control GUI in the Motion config, but it's still being served over HTTP and so is easy to hack or intercept.
So, I would also want to ensure it is all accessible only via HTTPS (secure, encrypted).
Items you will need:
an SSL certificate for your domain (available for free from letsencrypt.org)
a web server on the Pi (since Motion doesn’t use any installed webserver but instead has its own inbuilt one) - I’d recommend Nginx or Apache
certbot (to generate/install the certificate on the pi)
configure the web server to be a reverse proxy and serve the http motion website as https using your SSL certificate
secure the website (both Apache and Nginx support HTTP basic authentication which, if the reverse proxy is configured correctly, will be served over HTTPS and therefore encrypted; that is far better than unencrypted, base64-encoded (and easily decoded) credential info transmitted in the clear for all to see/intercept)
Other authentication options are available with some extra work, but as a bare minimum, basic auth plus full HTTPS is better than nothing.
In my Python application, the browser sends a request to my server to fetch certain information. I need to track the IP of the source from which the request was made. Normally, I am able to fetch that info with this call:
request.headers.get('Remote-Addr')
But, when I deploy the application behind a load balancer like HaProxy, the IP given is that of the load balancer and not the browser's.
How do I obtain the IP of the browser at my server when it's behind a load balancer?
Another problem in my case is that I am using a TCP connection from the browser to my server via HAProxy, not HTTP.
I had this issue with AWS ELB and Apache. The solution was mod_rpaf, which reads the X-Forwarded-For header and uses it to replace the remote IP that Apache sees.
You should check that HAProxy is setting the X-Forwarded-For header (which contains the real client IP). You can use mod_rpaf or another technique to read the real IP.
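On the application side, a minimal sketch of reading that header, assuming a Flask-style request object like the one implied in the question (adapt the attribute names to your framework):

def client_ip(request):
    """Return the browser's IP, preferring the proxy-supplied header."""
    forwarded = request.headers.get("X-Forwarded-For")
    if forwarded:
        # The first entry is the original client; proxies append themselves after it.
        return forwarded.split(",")[0].strip()
    # Fall back to the direct peer address (the load balancer when proxied).
    return request.remote_addr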