Python requests with authenticated proxy without exposing username and password - python

I am using the requests library to connect to an API and get data. I have a proxy to connect through, and it works as an authenticated proxy.
This is how my code looks:
import requests
s = requests.Session()
url = "https://testapi"
urlkey = "testkey"
proxies = {'https': 'http://<UserName>:<Password>@proxy_url:8080'}
resp = s.get(url, params={'key':urlkey }, proxies = proxies)
content = resp.content
print(content)
The username and password are exposed here and I want to avoid that. How can I achieve that? Can I ask requests to use DefaultCredentials, or the credentials of the account that is running the Python script?
In .NET, the following config works:
<system.net>
  <defaultProxy enabled="true" useDefaultCredentials="true">
    <proxy proxyaddress="https://testapi:8080" bypassonlocal="True"/>
  </defaultProxy>
</system.net>
Thanks all.

From the requests docs:
You can also configure proxies by setting the environment variables
HTTP_PROXY and HTTPS_PROXY.
So you could avoid revealing the credentials in the script by setting the environment variable instead:
$ export HTTPS_PROXY="http://<UserName>:<Password>@proxy_url:8080"
And then in the script you would use the proxy by just calling:
resp = s.get(url, params={'key':urlkey })
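Because Session.trust_env is True by default, requests picks the proxy up from the environment automatically. A minimal sketch of the whole script under that assumption (reading the API key from a hypothetical API_KEY environment variable keeps it out of the source too):
import os
import requests

# HTTPS_PROXY was exported in the shell, so no proxies= argument is needed;
# requests reads it automatically because trust_env defaults to True.
s = requests.Session()
resp = s.get("https://testapi", params={'key': os.environ["API_KEY"]})  # API_KEY is a hypothetical env var
print(resp.content)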

Related

How do I make domain authorization in Thycotic

I need some help authorizing in Thycotic using the current domain authorization via its API (no need to enter a domain username and password).
The Thycotic API has this possibility, but I don't understand how to use it. In its manual I see an example, but it uses PowerShell:
$api = "https://<Secret Server URL>/winauthwebservices/api/v1"
$endpoint = "$api/secrets/8387"
$secret = Invoke-RestMethod $endpoint -UseDefaultCredentials
In my Python script I'm trying with SSPI:
from requests_negotiate_sspi import HttpNegotiateAuth
site = 'https://<Secret Server URL>/SecretServer'
windows_auth_API = '/winauthwebservices/api/v1'
headers = {'Accept':'application/json', 'content-type':'application/x-www-form-urlencoded'}
resp = requests.post(site+windows_auth_API, headers=headers, auth=HttpNegotiateAuth)
How do I make the login work here? resp gives me 401: Unauthorized.
The solution (note that HttpNegotiateAuth must be instantiated, and the request is a GET against the full secret path):
from requests_negotiate_sspi import HttpNegotiateAuth
import requests
site = 'https://<Secret Server URL>/SecretServer'
windows_auth_API = '/winauthwebservices/api/v1'
resp = requests.get(site+windows_auth_API+"/secrets/8387", auth=HttpNegotiateAuth())
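From there, the secret can be read from the JSON body. A hypothetical continuation (the exact field names depend on your Secret Server version):
if resp.ok:
    secret = resp.json()  # Secret Server returns the secret record as JSON
    print(secret)
else:
    resp.raise_for_status()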

Python Requests, Work Proxy and Django REST

I appreciate your help here; thanks in advance.
My Problem:
I am using Python's Requests module for get/post requests to a Django REST API behind a work proxy. I am unable to get past the proxy and I encounter an error. I have summarised this below:
Using the following code (what I've tried):
s = requests.Session()
s.headers = {
    "User-Agent": [someGenericUserAgent]
}
s.trust_env = False
proxies = {
    'http': 'http://[domain]\[userName]:[password]@[proxy]:8080',
    'https': 'https://[domain]\[userName]:[password]@[proxy]:8080'
}
os.environ['NO_PROXY'] = [APIaddress]
os.environ['no_proxy'] = [APIaddress]
r = s.post(url=[APIaddress], proxies=proxies)
With this I get an error:
... OSError('Tunnel connection failed: 407 Proxy Authentication Required')))
Additional Context:
This is on a Windows 10 machine.
Work uses an "automatic proxy setup" script (.pac). Looking at the script, there are a number of proxies that are assigned automatically depending on the IP address of the machine. I have tried all of these proxies as [proxy] above, with the same error.
The above works when I am not on the work network and I remove the additional proxy settings (removing proxies=proxies), i.e. on my home network.
I have no issues with a GET request from my browser through the proxy to the Django REST API view.
Things I am uncertain about:
I don't know if I am using the right [proxy]. Is there a way to verify this? I have tried [findMyProxy].com sites; using the IP addresses they report, it still doesn't work.
I don't know if I am using [domain]\[userName] correctly. Is a \ correct? My work does use a domain.
I'm certain it is not a requests issue, as trying pip install --proxy http://[domain]\[userName]:[password]@[proxy]:8080 someModule bears the same 407 error.
Any help appreciated.
How I came to the solution:
I used curl to establish a 200 response; after a lot of trial and error, success was:
$ curl -U DOMAIN\USER:PW -v -x http://LOCATION_OF_PAC_FILE --proxy-ntlm www.google.com
Where -U takes the domain, user name and password.
-v is verbose, which made it easier to debug.
-x is the proxy, in my case the location of the .pac file. curl automatically determines the proxy IP from the PAC; requests does not do this by default (that I know of).
I used curl to determine that my proxy was using NTLM.
www.google.com served as an external site to test the proxy auth.
NOTE: only one \ between domain and username.
Trying to make requests use NTLM, I found it is not supported by default, so I used requests-ntlm2 instead.
The PAC file through requests-ntlm2 did not work, so I used pypac to autodiscover the PAC file and then determine the proxy based on the URL.
The working code is as follows:
from pypac import PACSession
from requests_ntlm2 import (
    HttpNtlmAuth,
    HttpNtlmAdapter,
    NtlmCompatibility
)
username = 'DOMAIN\\USERNAME'
password = 'PW'
# Don't need the following thanks to pypac
# proxy_ip = 'PROXY_IP'
# proxy_port = "PORT"
# proxies = {
#     'http': 'http://{}:{}'.format(proxy_ip, proxy_port),
#     'https': 'http://{}:{}'.format(proxy_ip, proxy_port)
# }
ntlm_compatibility = NtlmCompatibility.NTLMv2_DEFAULT
# session = requests.Session() <- replaced with PACSession()
session = PACSession()
session.mount(
    'https://',
    HttpNtlmAdapter(
        username,
        password,
        ntlm_compatibility=ntlm_compatibility
    )
)
session.mount(
    'http://',
    HttpNtlmAdapter(
        username,
        password,
        ntlm_compatibility=ntlm_compatibility
    )
)
session.auth = HttpNtlmAuth(
    username,
    password,
    ntlm_compatibility=ntlm_compatibility
)
# Don't need the following thanks to pypac
# session.proxies = proxies
response = session.get('http://www.google.com')
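For reference, the proxy that the PAC file resolves for a given URL can also be inspected with pypac directly; a quick sketch (the URL is just an example):
from pypac import get_pac

pac = get_pac()  # autodiscovers the PAC file (WPAD / Windows registry)
if pac:
    print(pac.find_proxy_for_url('http://www.google.com', 'www.google.com'))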

How to authenticate internal corporate proxy with credentials in order to reach external API

We have a requirement to consume an external API; in order to reach their endpoint, we need to authenticate with our proxy first.
How can we achieve this using Python? It seems there is an equivalent in
C# ---> CredentialCache.DefaultCredentials;
How do we do it in Python?
So far I have tried:
import requests
proxies = {"https":"https://url:port/file"}
client_cert = ("key/path", "cert/path")
data = """xml request"""
requests.post(url, proxies=proxies, data=data, cert=client_cert)
I have read in the docs that there is HTTP digest authentication, like
I can use https://username:password@url:port/file.
Any suggestions?
ERROR:
HTTPSConnectionPool, failed to establish connection
Actually, my question has an answer:
proxy = {"http": "http://username:password@proxy:port", "https": "http://username:password@proxy:port"}
requests.post(url, headers=headers, auth=auth, cert=cert, data=payload, proxies=proxy)  # ===> works
Or else we can set the environment variables:
export https_proxy="http://username:password@proxy:port"
export http_proxy="http://username:password@proxy:port"
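Setting them from within Python before the first request also works, since requests reads the environment at request time; a sketch (the credentials are placeholders):
import os

# requests consults these at request time because trust_env defaults to True
os.environ['HTTPS_PROXY'] = 'http://username:password@proxy:port'
os.environ['HTTP_PROXY'] = 'http://username:password@proxy:port'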
In my case there were multiple proxies for our company, and I was using the incorrect proxy details. When I tried with the correct one, it worked.
Thanks to Stack Overflow.

Download a file from https with authentication

I have a Python 2.6 script that downloads a file from a web server. I want this script to pass a username and password (for authentication before fetching the file), and I am passing them as part of the URL as follows:
import urllib2
response = urllib2.urlopen("http://'user1':'password'@server_name/file")
However, I am getting a syntax error in this case. Is this the correct way to go about it? I am pretty new to Python and coding in general.
Can anybody help me out?
Thanks!
If you can use the requests library, it's insanely easy. I'd highly recommend using it if possible:
import requests
url = 'http://somewebsite.org'
user, password = 'bob', 'I love cats'
resp = requests.get(url, auth=(user, password))
I suppose you are trying to pass through Basic Authentication. In this case, you can handle it this way:
import urllib2
username = 'user1'
password = '123456'
# This should be the base URL you wanted to access.
baseurl = 'http://server_name.com'
# Create a password manager
manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
manager.add_password(None, baseurl, username, password)
# Create an authentication handler using the password manager
auth = urllib2.HTTPBasicAuthHandler(manager)
# Create an opener that will replace the default urlopen method on further calls
opener = urllib2.build_opener(auth)
urllib2.install_opener(opener)
# Here you should access the full URL you wanted to open
response = urllib2.urlopen(baseurl + "/file")
Use the requests library and just put the credentials inside your ~/.netrc file.
The library will load them from there, and you will be able to commit the code to your SCM of choice without any security worries.
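A minimal sketch, reusing the host from the example above (the credentials are placeholders; on POSIX the file should be chmod 600):
# ~/.netrc  (on Windows: %HOME%\_netrc)
#   machine somewebsite.org
#   login bob
#   password hunter2
import requests

# With no auth= argument, requests falls back to the ~/.netrc entry
# whose "machine" matches the request host (trust_env must stay on).
resp = requests.get('http://somewebsite.org/file')
print(resp.status_code)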

Using urllib2 via proxy

I am trying to use urllib2 through a proxy; however, after trying just about every variation of passing my verification details using urllib2, I either get a request that hangs forever and returns nothing, or I get 407 errors. I can connect to the web fine using my browser, which connects to a proxy .pac file and redirects accordingly; however, I can't seem to do anything via the command line (curl, wget, urllib2, etc.), even if I use the proxies that the .pac file redirects to. I tried setting my proxy to each of the proxies from the .pac file using urllib2, none of which work.
My current script looks like this:
import urllib2 as url
proxy = url.ProxyHandler({'http': 'username:password@my.proxy:8080'})
auth = url.HTTPBasicAuthHandler()
opener = url.build_opener(proxy, auth, url.HTTPHandler)
url.install_opener(opener)
url.urlopen("http://www.google.com/")
which throws HTTP Error 407: Proxy Authentication Required. I also tried:
import urllib2 as url
handlePass = url.HTTPPasswordMgrWithDefaultRealm()
handlePass.add_password(None, "http://my.proxy:8080", "username", "password")
auth_handler = url.HTTPBasicAuthHandler(handlePass)
opener = url.build_opener(auth_handler)
url.install_opener(opener)
url.urlopen("http://www.google.com")
which hangs the way curl and wget do when they time out.
What do I need to do to diagnose the problem? How is it possible that I can connect via my browser but not from the command line on the same computer, using what would appear to be the same proxy and credentials?
Might it be something to do with the router? If so, how can it distinguish between browser HTTP requests and command-line HTTP requests?
Frustrations like this are what drove me to use Requests. If you're doing significant amounts of work with urllib2, you really ought to check it out. For example, to do what you wish to do using Requests, you could write:
import requests
from requests.auth import HTTPProxyAuth
proxy = {'http': 'http://my.proxy:8080'}
auth = HTTPProxyAuth('username', 'password')
r = requests.get('http://www.google.com/', proxies=proxy, auth=auth)
print(r.text)
Or you could wrap it in a Session object, so every request automatically uses the proxy information (plus it will store and handle cookies automatically!). In current versions of requests, Session() takes no constructor arguments, so set the attributes instead:
s = requests.Session()
s.proxies = proxy
s.auth = auth
r = s.get('http://www.google.com/')
print(r.text)
