Falcon: changing domain name for Request URL (Python)

For an API I am writing in Falcon (Python), I am trying to have the API hosted on a domain name other than localhost:8080. I want to be able to make requests to http://mydomainname rather than http://localhost:8080. How do I set that up when using Falcon to develop my API? Let me know if you can help.
Thanks!

As a WSGI/ASGI Python application framework, Falcon has little control over how your server is reached by domain name.
Your server's IP address is normally resolved by a DNS nameserver, so "hosting on a domain" essentially means making sure that (1) an application server is bound to one or more IP addresses, and (2) DNS queries resolve the domain name in question to those IP address(es). See also, e.g., How to attach domain name to my server?
That said, an HTTP/1.1+ request normally does include the domain name via the HTTP Host header; a Falcon application can access this information via Request.host, Request.uri, etc. If you expect your application to be accessed with different Host header values, you can use these Request properties to differentiate between domains.
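For illustration, here is a minimal stdlib-only sketch of that idea: a plain dict stands in for the WSGI environ (Falcon exposes the same value as Request.host), and the domain names are hypothetical, not from the question.

```python
# Minimal sketch: differentiate requests by the HTTP Host header.
# A plain dict stands in for the WSGI environ; the domain names
# below are hypothetical placeholders.

def site_for_host(environ):
    """Map the request's Host header (minus any port) to a site label."""
    host = environ.get('HTTP_HOST', '').split(':')[0]
    if host == 'api.mydomainname':
        return 'api'
    if host == 'www.mydomainname':
        return 'www'
    return 'default'

print(site_for_host({'HTTP_HOST': 'api.mydomainname:8080'}))  # -> api
```

In a Falcon responder you would branch on req.host the same way, since Falcon populates it from this header.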

Related

Google Cloud Storage Bucket is not associated with CNAME on DNS record for using my domain as origin

I intend to use Google Cloud Storage through my own domain name, supereye.co.uk.
However, when I try to associate a CNAME on my DNS record for supereye.co.uk with the Google Cloud bucket production-supereye-co-uk, I get the following message when I try to access
production-supereye-co-uk.supereye.co.uk/static/default-coverpng :
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist.</Message>
</Error>
What shall I do?
Important note: this is not a static site. It is a Django application that runs on Google Cloud Engine, and Django has its own URL routing mechanism, meaning Django translates everything after supereye.co.uk/ itself. I wonder how this should work.
So you cannot simply add a CNAME record to redirect some traffic to a given URL.
You are going to have to do one of the following to get your desired result:
Serve traffic from a new subdomain, data.supereye.co.uk, which will host your content.
Proxy data through your Django app. This is not ideal, but it would allow you to easily protect your data with authentication or authorization.
Proxy content through nginx: using nginx, you proxy (forward) the request through to your cloud bucket. Lightweight and fairly simple to implement.
Use a GCP load balancer to split the traffic: you can set up an LB to split requests between the backend group and the bucket using host/path rules.
I would go for either the LB or the nginx proxy, as these will be the easiest to implement (depending on your setup). If you want any form of access control, go for proxying the request through your Django app.
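For the nginx option, a rough sketch of the proxy configuration might look like the following; this is an assumption-laden outline, not a tested config: the bucket name comes from the question, but the location path and the use of the storage.googleapis.com endpoint are illustrative.

```nginx
# Hypothetical sketch: forward /static/ requests to the GCS bucket.
location /static/ {
    proxy_pass https://storage.googleapis.com/production-supereye-co-uk/static/;
    proxy_set_header Host storage.googleapis.com;
}
```

Everything else (including Django's URL routing) would continue to be served by the app server as before.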

Outbound IP for Python Azure Function - requests IP not match list of function outbound IP

I've created a Python function on Azure that calls an external API service which only allows access from whitelisted IPs.
Based on Microsoft's documentation (https://learn.microsoft.com/en-us/azure/azure-functions/ip-addresses), I found all OutboundIPAddresses and PossibleOutboundAddresses and whitelisted all of them. Even though the IPs have been whitelisted, I keep receiving a 403 Forbidden error from the service.
I also verified IP address of the request by adding following code to the function:
ip = requests.request('GET','https://api.ipify.org').text
logging.info('Request send from IP: {}'.format(ip))
It seems that the actual outbound IP address is different from those specified in the OutboundIPAddresses and PossibleOutboundAddresses lists.
I would appreciate your help.
Per our documentation:
When a function app that runs on the Consumption plan or the Premium plan is scaled, a new range of outbound IP addresses may be assigned. When running on either of these plans, you may need to add the entire data center to an allowlist.
If you need to add the outbound IP addresses used by your function apps to an allowlist, another option is to add the function apps' data center (Azure region) to an allowlist. You can download a JSON file that lists IP addresses for all Azure data centers. Then find the JSON fragment that applies to the region that your function app runs in.
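As a rough sketch of that last step, assuming the downloaded file follows the published Service Tags JSON layout (a top-level "values" array of tags, each with "properties.addressPrefixes"), pulling out one region's prefixes could look like this; the tag name and sample prefixes below are illustrative only, not real data.

```python
def region_prefixes(service_tags, tag_name):
    """Return the address prefixes listed for one service tag (e.g. a region)."""
    for tag in service_tags.get('values', []):
        if tag.get('name') == tag_name:
            return tag['properties']['addressPrefixes']
    return []

# Illustrative sample mirroring the Service Tags JSON layout.
sample = {'values': [
    {'name': 'AzureCloud.westeurope',
     'properties': {'addressPrefixes': ['13.69.0.0/17', '13.73.128.0/18']}},
]}
print(region_prefixes(sample, 'AzureCloud.westeurope'))
```

In practice you would load the downloaded JSON with json.load and feed every returned prefix into the external service's allowlist.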

HTTPS on EC2 instance running python project

I'm having considerable difficulty getting HTTPS to resolve on my EC2 instance, which runs a python project. The request just times out (ERR_CONNECTION_TIMED_OUT). HTTP runs ok, however. The steps I've taken are as follows.
I've created a certificate in ACM for the following domains: *.mywebsite.com and mywebsite.com
I've set up Route 53 as follows:
The routing policy on the A records is Simple.
I've gone into the Listener for my Load Balancer for my EC2 instance and CHANGED the port from 80 (HTTP) TO 443 (HTTPS) and added my certificate.
Note: the "Forward To" is a Target Group running on port 80 (HTTP). I've read that this is correct.
I've then gone into the Inbound Rules for my Security group, and added HTTPS
At this point, I've got the following questions:
a) Given that this is a python/Django project, is enabling HTTPS for EC2 possible to do this through the aws website or do I need to add config files and deploy to my instance?
b) Do I need to create a target group running on HTTPS?
c) Do I need listeners on my load balance for port 80 and port 443 or just port 443?
d) On my security group, do I need port 80 to go to 0.0.0.0/0 and ::/0?
e) Should the A record be the DNS name of the load balancer, or should it be the CNAME of my environment?
Thanks for your help! Once we get the answer here, I'm going to write a guide and post it on youtube.
Let me start by giving you a brief overview of how a request flows in this case.
As you have rightly guessed, the Load Balancer (an Application Load Balancer, to be specific) can handle SSL traffic. This also means that from the Load Balancer to the origin server (the mentioned target group in this case), only HTTP traffic will flow, not HTTPS, so you don't have to worry about handling certificates on the server. The response from the origin server is then wrapped up again in an SSL tunnel and sent back to the client by the ALB.
This means that your end user should be able to connect to the Load Balancer on port 443 at least, and optionally on port 80 (which can redirect to 443).
This means the security group of your load balancer should have port 443 (and optionally 80) open to the world, or to your users.
Between the origin server and the ALB, traffic flows on whichever port your app is running on, so the server's security group should allow access from the ALB on that port.
To rephrase, the server (EC2) security group should allow the ALB on whichever port the application is running.
Note: this doesn't have to be 80 or 443; it can also be 8080, as long as your target group knows about it and forwards the request on that port.
Now to answer your questions:
a) Given that this is a python/Django project, is enabling HTTPS for EC2 possible to do this through the aws website or do I need to add config files and deploy to my instance?
You don't have to do this. As I mentioned, the encryption/decryption can be offloaded to the ALB. Read more about it in the docs here.
b) Do I need to create a target group running on HTTPS?
This builds up on the previous question, no you don't have to. The app server/EC2 instance should not be concerned with this.
c) Do I need listeners on my load balance for port 80 and port 443 or just port 443?
This depends on your use case. The base necessity is to have only 443. If you want to allow users to still land on the http site and then be redirected to a more secure https version, you can again make use of the ALB for this. More about it here.
d) On my security group, do I need port 80 to go to 0.0.0.0/0 and ::/0?
For the ALB, yes, but not for the EC2 instances. Remember that EC2 never communicates directly with users, only with the ALB, so you can control the traffic on EC2 more tightly.
e) Should the A record by the DNS name of the load balancer or should it be the CNAME of my environment?
Use Alias records. They are much easier to manage, and AWS will take care of the mapping. More about this here.
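As an illustration of the HTTP-to-HTTPS redirect mentioned under (c), an AWS CLI sketch might look like the following; the load balancer ARN is a placeholder and this is an untested outline, not a definitive command:

```shell
# Hypothetical sketch: add an HTTP:80 listener that redirects to HTTPS:443.
aws elbv2 create-listener \
  --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions '[{"Type":"redirect","RedirectConfig":{"Protocol":"HTTPS","Port":"443","StatusCode":"HTTP_301"}}]'
```

The same redirect action can also be configured from the console on the listener's default rule.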

HTTP request via proxy on AWS server fails with 407

I'm running this brief script locally as well as on an AWS EC2 instance in an ECS cluster:
import requests
proxies = {'http': 'http://user:pw@host:port', 'https': 'http://user:pw@host:port'}
r = requests.get('http://quotes.toscrape.com/', proxies=proxies)
print(r.status_code)
When I run the script locally, I get a 200 status code, indicating that I am able to successfully access the website via the proxy.
When I run the script on the AWS instance, I get a 407 proxy authentication error.
This is a common error that others have experienced (e.g. see here). I'm looking for a solution that allows me to authenticate with the proxy WITHOUT having to whitelist the instance.
The reason being that every time I run a new instance, I'd have to whitelist that instance too. I would rather just pass the credentials to requests and authenticate against the proxy directly.
Is this possible?
I would suggest you launch the instances in a private subnet and whitelist your NAT's EIP(s). In this case you will only have to whitelist 1-4 IP addresses, depending on whether you are using a single NAT or a NAT per AZ (which is recommended).
Hopefully it makes sense; feel free to ask additional questions.
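One thing worth double-checking in scripts like the one above: if the proxy username or password contains reserved characters, they must be percent-encoded, or the proxy URL will be mis-parsed and authentication will fail. A small stdlib sketch (the credentials and host below are placeholders, not from the question):

```python
from urllib.parse import quote

def proxy_url(user, password, host, port):
    """Build a proxy URL with percent-encoded credentials."""
    return f"http://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}"

print(proxy_url('user', 'p#ss@w0rd', 'proxy.example.com', 8080))
# -> http://user:p%23ss%40w0rd@proxy.example.com:8080
```

The resulting string can be used directly as the value in the proxies dict passed to requests.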

How to identify computers on intranet?

How can we identify distinct computers/devices on an intranet?
This is possible using cookies, but that is not foolproof.
I am expecting something along the lines of finding the local IP address.
It would be great if you mention some tools(libraries) required to integrate it with an intranet application. The application is designed in Python(Django).
You can get the client (computer connecting to your web server) IP address from the HttpRequest object. If your Django view is def MyView(request): you can get the IP from request.META.get('REMOTE_ADDR'). Is that what you're looking for?
You could take a look at the HttpRequest documentation on Django: https://docs.djangoproject.com/en/dev/ref/request-response/
There you'll find that you can get the remote IP address of the user from the request object in your view or middleware, using request.META["REMOTE_ADDR"].
I use this on a multihomed server where requests from the internal LAN come to a local IP address and public requests go to a public IP address; by comparing REMOTE_ADDR to the beginning of my internal LAN address range, I can tell whether a request is internal or not.
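The comparison described above can be sketched as a small helper; the LAN prefix here is an assumption, and in a real Django view the dict would be request.META:

```python
def is_internal(meta, lan_prefix='192.168.1.'):
    """True if REMOTE_ADDR starts with the internal LAN prefix (assumed here)."""
    return meta.get('REMOTE_ADDR', '').startswith(lan_prefix)

print(is_internal({'REMOTE_ADDR': '192.168.1.42'}))   # True
print(is_internal({'REMOTE_ADDR': '203.0.113.7'}))    # False
```

Note that REMOTE_ADDR reflects the directly connecting peer; behind a proxy or load balancer it will be the proxy's address, not the original client's.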
