Python3 proxies: http or https?

I wrote a Python script that uses proxies:

import requests
from requests.auth import HTTPProxyAuth

proxies = {
    "http": "http://{}:{}".format(proxy, port)
}
auth = HTTPProxyAuth(user, passwd)

session = requests.Session()
session.proxies = proxies
session.auth = auth
response = session.get(link)
I'm curious: when I use an HTTP proxy, websites with SSL certificates such as https://stackoverflow.com/ still know my real location.
Should I only use HTTPS proxies, or am I doing something wrong?

The proxies dictionary requires an entry for every protocol you're interested in.
For example:

proxies = {
    "http": "http://{}:{}".format(proxy, port),
    "https": "https://{}:{}".format(proxy, port)
}

would work if you want to use the same proxy for both http and https.
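For completeness, here is how the session from the question might look with both entries. This is a minimal sketch; note that many proxies only accept plain-HTTP connections from the client (HTTPS traffic is then tunneled through them), in which case the same "http://" proxy URL is used for the "https" key as well:

import requests
from requests.auth import HTTPProxyAuth

# One proxy handles both plain-HTTP and tunneled HTTPS traffic.
proxies = {
    "http": "http://{}:{}".format(proxy, port),
    "https": "http://{}:{}".format(proxy, port),
}

session = requests.Session()
session.proxies = proxies
session.auth = HTTPProxyAuth(user, passwd)

response = session.get("https://stackoverflow.com/")
print(response.status_code)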

Related

ProxyError when trying to query Prometheus behind a proxy

I am coding a module that needs to query Prometheus, where Prometheus sits behind a proxy and the module makes its queries from my local environment. My development environment is a virtual machine with the correct environment variables and DNS settings, and it is able to talk to the Prometheus instance behind the proxy, for example by accessing the front-end GUI.
I've tested my requests.get() call when it is executed on the network behind the proxy and it returns the correct values, so I am fairly sure the proxy is causing the problem; for some reason I can't get the program to respect the proxies dictionary I am feeding to requests. I am using Visual Studio Code and Python 3.9.7.
When executing the code at the bottom of this post, I get a long chain of errors, of which the last one is this (I have cleared some values, such as the proxy servers, URL and query, for privacy reasons; they are correct and in place in my code):
requests.exceptions.ProxyError: HTTPSConnectionPool(host='', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response')))
Relevant Python code:

import requests
import json

http_proxy = ''
https_proxy = ''
ftp_proxy = ''

proxies = {
    "http": http_proxy,
    "https": https_proxy,
    "ftp": ftp_proxy
}
headers = {
    'Content-Type': 'application/json',
}

response = requests.get(url='' + '/api/v1/query', verify=False, headers=headers, proxies=proxies, params={'query': ''}).text
j = json.loads(response)
print(j)
Any help is greatly appreciated!
There are several open bugs about missing or broken no_proxy support in Python Requests (e.g. #4871, #5000).
The only workaround for now, AFAICS, is to implement the missing logic in a function like the following one:
import json
import requests
import tldextract

http_proxy = "http://PROXY_ADDR:PROXY_PORT"
https_proxy = "http://PROXY_ADDR:PROXY_PORT"
no_proxy = "example.net,.example.net,127.0.0.1,localhost"

def get_url(url, headers={}, params={}):
    ext = tldextract.extract(url)
    proxy_exceptions = [host for host in no_proxy.replace(' ', '').split(',')]
    if (ext.domain in proxy_exceptions
            or '.'.join([ext.subdomain, ext.domain, ext.suffix]) in proxy_exceptions
            or '.'.join([ext.domain, ext.suffix]) in proxy_exceptions):
        proxies = {}
    else:
        proxies = {
            "http": http_proxy,
            "https": https_proxy,
        }
    response = requests.get(url=url,
                            verify=False,
                            headers=headers,
                            params=params,
                            proxies=proxies)
    return response

url = "https://example.net/api/v1/query"
headers = {
    'Content-Type': 'application/json',
}
response = get_url(url, headers)
j = json.loads(response.text)
print(j)
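To sanity-check that requests really go through the proxy (or bypass it for hosts listed in no_proxy), one option is to hit an IP-echo endpoint such as https://httpbin.org/ip and compare the reported address. A small sketch reusing the get_url() helper above:

# The echoed IP should be the proxy's address when the proxy is used,
# and your own public IP when the target host is listed in no_proxy.
check = get_url("https://httpbin.org/ip")
print(check.json())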

How to connect to a website with a proxy in Python

The code I tried gives no errors, but it prints my own IP instead of the proxy's.

import requests

proxies = {
    "https://": "51.79.145.108:3128",
    "http://": "51.79.145.108:3128"
}

url = "https://httpbin.org/ip"
r = requests.get(url, proxies=proxies)
ip = r.json()
print(ip)
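Judging by the other answers in this thread, the likely culprit is the proxies dictionary: requests expects plain "http" / "https" keys (without "://") and proxy URLs that include a scheme. A sketch of that variant, using the same (possibly no longer reachable) proxy address from the question:

import requests

# Keys are the scheme of the target URL, not a URL prefix,
# and the proxy value itself needs a scheme.
proxies = {
    "http": "http://51.79.145.108:3128",
    "https": "http://51.79.145.108:3128",
}

r = requests.get("https://httpbin.org/ip", proxies=proxies)
print(r.json())  # should now report the proxy's IP rather than yours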

How do I make proxies work with requests?

When I try to run my code, I get an error and I can't understand why. Help!
import requests
import json

proxies = {
    "https": "189.113.217.35:49733",
    "http": "5.252.161.48:8080"
}

r = requests.get("https://groups.roblox.com/v1/groups/1", proxies=proxies)
j = r.json()
print(j)
I figured it out: my IP address didn't have access to the proxies.
It's pretty simple. I would create a session:

session = requests.Session()

then a proxies dict:

proxies = {
    'http': 'http://5.252.161.48:8080',
    'https': 'http://5.252.161.48:8080'
}

and inject the proxies into the session:

session.proxies.update(proxies)
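Putting that together with the original request, a minimal sketch (same proxy address as in the question, which may of course no longer be reachable):

import requests

session = requests.Session()
session.proxies.update({
    'http': 'http://5.252.161.48:8080',
    'https': 'http://5.252.161.48:8080',
})

# Every request made on this session is now routed through the proxy.
r = session.get("https://groups.roblox.com/v1/groups/1")
print(r.json())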

Proxies attribute in Requests module is ignored

I'm building a small script to test certain proxies against an API.
It seems that the actual request isn't actually sent through the provided proxy. For example, the following request is accepted and I get a response from the API:

import requests

r = requests.post("https://someapi.com", data=request_data,
                  proxies={"http": "http://999.999.999.999:1212"}, timeout=5)
print(r.text)
How come I get the response even if the proxy provided was invalid?
Your request goes to an https URL, but your proxies dictionary only has an "http" entry, so requests falls back to a direct connection and the invalid proxy is never used. You can define the proxies like this:
import requests

pxy = "http://999.999.999.999:1212"

proxyDict = {
    'http': pxy,
    'https': pxy,
    'ftp': pxy,
    'SOCKS4': pxy
}

r = requests.post("https://someapi.com", data=request_data,
                  proxies=proxyDict, timeout=5)
print(r.text)
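As a quick way to see the difference, compare what happens when the proxies dict does and does not contain an entry matching the request's scheme. A hedged sketch against httpbin.org rather than the original API:

import requests

bad_proxy = "http://999.999.999.999:1212"

# Only an "http" entry: the https request goes direct and succeeds.
r = requests.get("https://httpbin.org/ip",
                 proxies={"http": bad_proxy}, timeout=5)
print(r.json())

# With an "https" entry as well, requests now tries the unreachable
# proxy and raises a connection error (typically ProxyError).
try:
    requests.get("https://httpbin.org/ip",
                 proxies={"http": bad_proxy, "https": bad_proxy}, timeout=5)
except requests.exceptions.ConnectionError as exc:
    print("Proxy failed:", exc)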

How to stay connected to a website using requests

I want to connect to a website with a proxy and stay connected there for, let's say, 10 seconds.
My script:

import requests

url = 'http://WEBSITE.com/'
proxies = {'http': 'http://IP:PORT'}

s = requests.Session()
s.proxies.update(proxies)
s.get(url)

From what I've learnt so far, this script connects to the website, but I don't think it stays connected. What should I do so that it connects to the website through the proxy and stays connected?
The Session object doesn't necessarily keep the connection alive. To that end, this might work:

import requests

url = 'http://WEBSITE.com/'
proxies = {'http': 'http://IP:PORT'}
headers = {
    "connection": "keep-alive",
    "keep-alive": "timeout=10, max=1000"
}

s = requests.Session()
s.proxies.update(proxies)
s.get(url, headers=headers)

See the Connection and Keep-Alive headers :)
Edit: after reviewing the requests documentation, I learned that the Session object can also be used to store headers. Here is a slightly better answer:

import requests

url = 'http://WEBSITE.com/'
proxies = {'http': 'http://IP:PORT'}
headers = {
    "connection": "keep-alive",
    "keep-alive": "timeout=10, max=1000"
}

s = requests.Session()
s.proxies.update(proxies)
s.headers.update(headers)
s.get(url)
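For what it's worth, the connection only stays useful if you keep making requests on the same Session; a rough sketch along the lines of the snippet above (WEBSITE.com, IP and PORT are the placeholders from the question):

import time
import requests

url = 'http://WEBSITE.com/'
proxies = {'http': 'http://IP:PORT'}
headers = {
    "connection": "keep-alive",
    "keep-alive": "timeout=10, max=1000"
}

s = requests.Session()
s.proxies.update(proxies)
s.headers.update(headers)

# As long as the server honours keep-alive, the underlying TCP
# connection can be reused across these requests on the same session.
s.get(url)
time.sleep(10)
s.get(url)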
