I'm running this brief script locally as well as on an AWS EC2 instance in an ECS cluster:
import requests

proxies = {'http': 'http://user:pw@host:port', 'https': 'http://user:pw@host:port'}
r = requests.get('http://quotes.toscrape.com/', proxies=proxies)
print(r.status_code)
When I run the script locally, I get a 200 status code, indicating that I am able to access the website through the proxy.
When I run the script on the AWS instance, I get a 407 proxy authentication error.
This is a common error that others have experienced (e.g. see here). I'm looking for a solution that allows me to authenticate the proxies WITHOUT having to whitelist the instance.
The reason is that every time I launch a new instance, I'd have to whitelist that instance too. I would rather just pass the credentials to requests and authenticate against the proxy directly.
Is this possible?
I would suggest launching your instances in a private subnet and whitelisting your NAT gateway's Elastic IP(s). In that case you will only have to whitelist 1-4 IP addresses, depending on whether you use a single NAT gateway or one NAT per AZ (which is recommended).
Hopefully that makes sense; feel free to ask additional questions.
Related
I am using the requests library to make google queries.
url = 'https://google.com/search?hl=en&q='
request_result = requests.get(url + query, headers=headers, proxies=proxies)
I always get a 429 status when I make the request via a proxy. I have tried several proxies, both free ones and a paid one with a dynamically changing IP.
Does this mean that somebody else has already made requests via these proxies, or is there something I can do to make it work? If I run it through my own IP, it works fine.
For an API I am writing in Falcon (Python), I am trying to host the API on a domain name other than localhost:8080. I want to be able to make requests to http://mydomainname rather than http://localhost:8080. How do I set up my API to do that when developing it with Falcon? Let me know if you can help.
Thanks!
As a WSGI/ASGI Python application framework, Falcon does not have much control over how your server is reached by domain name.
Your server IP address is normally resolved by a DNS nameserver, so "hosting on a domain" is essentially making sure that (1) an application server is bound to 1 or more IP addresses, and (2) DNS queries resolve the domain name in question to these IP address(es). See also, e.g., How to attach domain name to my server?
That said, an HTTP 1.1+ request normally does include the domain name via the HTTP Host header; a Falcon application can access this information via Request.host, Request.uri, etc. If you expect your application to be accessed with different Host header values, you can use these Request properties to differentiate between domains.
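For illustration, here is a minimal sketch of a resource that branches on the Host header (assuming Falcon 3.x; the /whoami route and the domain names are hypothetical):

import falcon

class WhoAmIResource:
    def on_get(self, req, resp):
        # req.host is parsed from the HTTP Host header (or the request URI)
        if req.host == 'api.example.com':  # hypothetical domain
            resp.media = {'site': 'api', 'uri': req.uri}
        else:
            resp.media = {'site': 'default', 'host': req.host}

app = falcon.App()
app.add_route('/whoami', WhoAmIResource())

Either way, mapping the domain to your server is handled by DNS and your web/application server, not by Falcon itself.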
I have various Python scripts that use the NiFi REST API to make some calls. The scripts work locally on my machine and on other people's machines. I am trying to run the scripts through Jenkins, which is running on an AWS EC2 instance. The scripts do not work on the Jenkins EC2 instance; however, they do work on other EC2 instances within the same AWS account and security group. The only way I am able to get a script to work on the Jenkins EC2 instance is by using verify=False for the REST call. However, I need to get it working on Jenkins without verify=False, given that some of the REST calls I need to make won't work with it.
The certs I am using are two PEM files generated from a p12 file we use for NiFi. The certs work everywhere else, so I do not think the issue is with them. I have also tried various Python versions and still get the same result, so I do not think it is that either. I have the public and private IP addresses of the Jenkins server opened up for ports 22, 8443, 18443, 443, 9999, 8080, and 18080, so I don't think it is a port issue either. I don't have much experience with SSL, so I'm lost on what to try next. But given that it works locally and on the AWS EC2 instance we're running the NiFi dev version on, I am out of ideas.
Python Script (the other scripts have the same issue and similar structure):
import json, requests, sys

with open("jenkinsCerts.dat") as props:
    certData = json.load(props)
    cert = (certData["crt"], certData["key"])

def makeRestGetCall(url, verify=True):
    # calls a RESTful url and prints the response
    if "https" in url:
        response = requests.get(url, cert=cert, verify=verify)
    else:
        response = requests.get(url)
    print(response)

with open('servers.txt') as nifi_server_list:
    errorCount = 0
    data = json.load(nifi_server_list)
    for server in data:
        try:
            print("trying: " + server["name"] + " (" + server["url"] + ")")
            makeRestGetCall(server["url"], verify=False)
        except:
            print(server["name"] + " (" + server["url"] + ") did not respond")
            errorCount = errorCount + 1

try:
    assert errorCount == 0
except AssertionError:
    print(errorCount, "servers did not respond")
The script above doesn't give any error, just output showing that the calls failed, even though the same script works on other machines at the same time:
trying: dev-cluster-node-1
dev-cluster-node-1 did not respond
trying: dev-cluster-node-2
dev-cluster-node-2 did not respond
trying: dev-cluster-node-3
dev-cluster-node-3 did not respond
trying: dev-registry
dev-registry did not respond
trying: dev-standalone
dev-standalone did not respond
5 servers did not respond
This is the error I get from Jenkins when I run a different Python script that uses the same authentication as above (the full script was too long to copy and isn't necessary):
*requests.exceptions.SSLError: HTTPSConnectionPool(host='ec2-***-***-***.****-1.compute.amazonaws.com', port=8443): Max retries exceeded with url: /nifi-api/flow/process-groups/3856c256-017-****-***** (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'),))*
I believe the issue is that your script isn't aware of the expected public certificates of your NiFi servers in order to verify them during the request.
The crt and key values you are providing should contain the public certificate and private key of the Python script in order to authenticate to the NiFi servers. This material identifies the client in this case, which is required for mutual TLS authentication (one of the various authentication mechanisms NiFi supports).
However, with all TLS handshakes, the server must also provide a public certificate identifying itself and with a CN or SAN matching the hostname serving the connection (e.g. if you visit https://stackoverflow.com, the website needs to present a certificate issued for stackoverflow.com, not andys-fake-stackoverflow.com).
Most websites on the public internet have their certificates signed by a Certificate Authority (Let's Encrypt, Comodo, Verisign, etc.). Your browser and many software components come with a collection of these trusted CAs so that TLS connections work out of the box. However, if the certificates used by your NiFi servers are not signed by one of these CAs, the default Python list is not going to contain their signer, so it won't be able to verify these NiFi certs. You'll need to provide the signing public certificate to your Python code to enable this.
The requests module allows you to configure a custom CA bundle path with the public certificate that signed your NiFi certs. You can obtain that in a variety of ways (you have direct access to the NiFi servers, but any connection [via browser, openssl s_client, curl, etc.] will allow you to obtain the public certificate chain). Store the public cert (nifi_ca.pem) in PEM format somewhere in your Python script's folder structure, and reference it like so:
response = requests.get(url, cert=cert, verify="nifi_ca.pem")
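If you don't want to copy the file off the NiFi hosts by hand, one possible way to capture the certificate a server presents is with Python's standard library. This is only a sketch (the hostname and port are placeholders), and it captures just the leaf certificate the server sends, which is sufficient when the NiFi certs are self-signed; if they are CA-signed, you still need the signing CA's certificate instead:

import ssl

host, port = 'dev-cluster-node-1.example.com', 8443  # placeholder host/port
pem_cert = ssl.get_server_certificate((host, port))  # PEM-encoded server certificate

with open('nifi_ca.pem', 'w') as f:
    f.write(pem_cert)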
Hi, I'm trying to make a test server to get Facebook authentication working.
I am using Python Flask; the current app itself is a copy of this OAuth sample, nothing else.
So, I have set up everything I could think of:
an ACM certificate to get HTTPS working,
a load balancer (Classic), with the cert from the ACM above and the instance I want to connect to,
a domain registered in Route 53, with an alias target set up (the ELB),
etc.
After setting this up, I went into my EC2 instance using PuTTY, set up a virtual env, and ran the app using the venv. And the app itself is working fine, except the ELB health check can't pick it up, nor can I access it by typing in the domain address.
It only works when typing in the EC2 instance's public IP, and now I'm stuck here, not knowing how to 'integrate' the instance with my registered domain...
The instance status in the load balancer's instance tab shows OutOfService.
^ EDIT: the instance tab works now; it was due to the port number. But the domain still doesn't respond...
Is there anything else I could check to get this working? Any help is greatly appreciated... I've been stuck on this for over a month now :(
EDIT: currently the ELB health checks on the EC2 instance pass properly, but I get infinite loading when I try to access the app by typing in the domain name.
This is hard to troubleshoot on here, but I would bet that you need to set up the security group so that the load balancer can hit the correct port on your EC2 instance (that is the most common cause of this).
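As a rough sketch of what that ingress rule could look like with boto3 (the security group IDs and port below are placeholders; this assumes the app listens on port 8080 and the ELB has its own security group):

import boto3

ec2 = boto3.client('ec2')

# Allow the load balancer's security group to reach the instance's app port.
ec2.authorize_security_group_ingress(
    GroupId='sg-0instance00000000',  # placeholder: the EC2 instance's security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 8080,            # placeholder: the port your app listens on
        'ToPort': 8080,
        'UserIdGroupPairs': [{'GroupId': 'sg-0elb000000000000'}],  # placeholder: the ELB's security group
    }],
)

The same rule can of course be added through the EC2 console instead.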
This is likely a question with an easy answer, but I can't seem to figure it out.
Background: I have a Python Lambda function that picks up changes in a DB and then POSTs the changes as JSON to a URL. I'm using urllib2 sort of like this:
import urllib2

# this runs inside a loop; in reality my error handling is much better
request = urllib2.Request(url)
request.add_header('Content-type', 'application/json')
try:
    response = urllib2.urlopen(request, json_message)
except:
    response = "Failed!"
From the logs, it seems the call to send the messages is either skipped entirely or times out while waiting for a response.
Is there a permission setting I'm missing? The outbound rules in AWS appear to be right. [Edit] The VPC applied to this Lambda does have internet access, and the security groups applied appear to allow internet access. [/Edit]
I've tested the code locally (connected to the same data source) and it works flawlessly.
It appears the other questions related to posting from a Lambda are about Node.js, and the cause is usually that the URL is wrong. In this case, I'm using a requestb.in URL that I know is working, since it works when running locally.
Edit:
I've set up my NAT gateway, and it should work. I've even gone as far as going to a different AWS account, recreating the conditions, and it works fine there. I can't see any security groups that would be blocking access anywhere. It's continuing to time out.
Edit:
Turns out I was just an idiot when I set up my default route to the NAT gateway: out of habit I wrote 0.0.0.0/24 instead of 0.0.0.0/0.
If you've deployed your Lambda function inside your VPC, it does not obtain a public IP address, even if it's deployed into a subnet with a route to an Internet Gateway. It only obtains a private IP address, and thus cannot communicate with the public Internet by itself.
To communicate with the public Internet, Lambda functions deployed inside your VPC need to be placed in a private subnet that has a route to either a NAT Gateway or a self-managed NAT instance.
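If it helps to confirm that, here is a hedged diagnostic sketch using boto3 (the subnet ID is a placeholder) that checks whether the Lambda's subnet routes 0.0.0.0/0 to a NAT gateway:

import boto3

ec2 = boto3.client('ec2')
subnet_id = 'subnet-0123456789abcdef0'  # placeholder: the subnet the Lambda runs in

# Only explicitly associated route tables are returned by this filter;
# if nothing comes back, check the VPC's main route table instead.
tables = ec2.describe_route_tables(
    Filters=[{'Name': 'association.subnet-id', 'Values': [subnet_id]}]
)
for table in tables['RouteTables']:
    for route in table['Routes']:
        if route.get('DestinationCidrBlock') == '0.0.0.0/0':
            # A NatGatewayId here is what you want; an igw-... GatewayId means
            # the subnet is public and the in-VPC Lambda still has no path out.
            print(route.get('NatGatewayId') or route.get('GatewayId'))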
I have also faced the same issue. I worked around it by using boto3 to invoke one Lambda from another.
import boto3

client = boto3.client('lambda')
response = client.invoke(
    FunctionName='my-target-function',   # placeholder: name or ARN of the Lambda to invoke
    InvocationType='Event',              # or 'RequestResponse' / 'DryRun'
    LogType='None',                      # or 'Tail'
    Payload=b'{"key": "value"}',         # placeholder payload (bytes or a file object)
)
But make sure that you set the IAM policy on the Lambda's role (in the source AWS account) so that it is allowed to invoke the other Lambda.
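For illustration only, a sketch of granting that permission with boto3 (the role name, policy name, and function ARN are placeholders; you could equally attach the same statement through the console or your IaC tooling):

import json
import boto3

iam = boto3.client('iam')

# Inline policy allowing the source Lambda's execution role to invoke the target function.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': 'lambda:InvokeFunction',
        'Resource': 'arn:aws:lambda:us-east-1:123456789012:function:my-target-function',  # placeholder ARN
    }],
}

iam.put_role_policy(
    RoleName='source-lambda-role',      # placeholder: the invoking Lambda's execution role
    PolicyName='invoke-target-lambda',  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)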
Adding to the above: boto3 itself makes HTTP(S) calls to the AWS APIs under the hood.