In a Python script I use the "Requests" library with HTTP basic authentication and a custom CA certificate to trust like this:
import requests
response = requests.get(base_url, auth=(username, password), verify=ssl_ca_file)
All the requests I need to make must use these parameters. Is there a Pythonic way to set these as defaults for all requests?
Use Session(). Documentation states:
The Session object allows you to persist certain parameters across
requests.
import requests

s = requests.Session()
s.auth = (username, password)  # default credentials for every request
s.verify = ssl_ca_file         # default CA bundle for every request
s.get(base_url)
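Session-level values are only defaults: anything you pass to an individual call takes precedence for that call. A quick illustration (the alternate credentials here are placeholders):
response = s.get(base_url, verify=False)  # skip CA verification for this call only
response = s.get(base_url, auth=(other_user, other_password))  # one-off credentials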
I need to talk to a SOAP server which requires "preemptive authentication" (it uses BasicAuth).
I have no idea of how to configure my zeep client to make it behave accordingly.
As it says here, the SoapUI tool can be configured to use "preemptive authentication". Can anyone please help me achieve the same (either by configuring zeep or requests)?
Here is my code, which is pretty standard:
from requests import Session
from requests.auth import HTTPBasicAuth
from zeep import Client
from zeep.transports import Transport

session = Session()
session.verify = False  # ignore the server certificate
session.auth = HTTPBasicAuth(user, pwd)
transport = Transport(session=session)
client = Client(wsdl, transport=transport)
# ...
response = client.service.Operation(**params)
The above fails to authenticate and ends up with an SSL error, which is expected.
Any help is much appreciated. Thank you
In theory, you should be able to do this by creating a session and setting the header directly. That way the header is sent with the initial request, instead of relying on the usual auth behavior of waiting for a challenge.
import base64
import requests
import zeep

session = requests.Session()
# base64-encode "user:pass" so the Basic header is sent preemptively
credentials = base64.b64encode(f'{user}:{pwd}'.encode()).decode()
session.headers['Authorization'] = 'Basic ' + credentials
transport = zeep.Transport(session=session)
client = zeep.Client(wsdl=soapURI, transport=transport)
I'm able to authenticate and connect with boto's AWSQueryConnection, but whenever I try to get information about a URL using the 'UrlInfo' method, I receive a 204 response with no data.
from boto.connection import AWSQueryConnection

conn = AWSQueryConnection(
    aws_access_key_id='',
    aws_secret_access_key='',
    host='awis.amazonaws.com',
)
response = conn.make_request('UrlInfo', params={
    'Url': 'http://reddit.com',
    'ResponseGroup': 'LinksInCount',
})
print(response.status)
Is there anything wrong with the way I'm using this module?
I was working on a similar thing lately; here is the code I was able to get working using aws-requests-auth, which has built-in support for boto3 (note the host, the region, and the safe parameter passed to quote):
import requests
from urllib.parse import quote

from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

auth = BotoAWSRequestsAuth(
    aws_host='awis.us-west-1.amazonaws.com',
    aws_region='us-west-1',
    aws_service='awis'
)
url = 'https://awis.us-west-1.amazonaws.com/api'
query_params = quote(
    'Action=UrlInfo&ResponseGroup=LinksInCount&Url=google.com',
    safe='/-_.~=&'
)
response = requests.get(url + '?' + query_params, auth=auth)
print(response.content)
If you prefer to do it without any third-party library, you could always do:
from boto3.session import Session
aws_credentials = Session().get_credentials()
print(aws_credentials.access_key)
print(aws_credentials.secret_key)
Then follow the full request-signing process described in the AWIS documentation under Calculating Signatures.
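As a rough sketch, you could also let botocore's bundled SigV4 signer do that work for you; the endpoint, region, and 'awis' service name below are assumptions carried over from the example above, so verify them against the AWIS documentation:
from boto3.session import Session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
import requests

credentials = Session().get_credentials()
# Build an unsigned request, then let botocore attach the SigV4 headers.
request = AWSRequest(
    method='GET',
    url='https://awis.us-west-1.amazonaws.com/api'
        '?Action=UrlInfo&ResponseGroup=LinksInCount&Url=google.com',
)
SigV4Auth(credentials, 'awis', 'us-west-1').add_auth(request)
response = requests.get(request.url, headers=dict(request.headers))
print(response.content)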
We have a requirement to consume an external API; in order to reach their endpoint, we first need to authenticate with our proxy.
How can we achieve this using Python? C# seems to have it built in:
c# ---> CredentialCache.DefaultCredentials;
How do we do it in Python? So far I have tried:
import requests

proxies = {"https": "https://url:port/file"}
client_cert = ("key/path", "cert/path")
data = """xml request"""
requests.post(url, proxies=proxies, data=data, cert=client_cert)
I have read in the docs that there is HTTP digest authentication, where I can use https://username:password@url:port/file.
Any suggestions?
ERROR:
HTTPSConnectionPool, failed to establish connection
Actually, my question has an answer:
proxy = {"http": "http://username:password#proxy:port", "https":"http://username:password#proxy:port"}
requests.post(url, headers, auth, cert, payload, proxies=proxy) #===> works
Or else we can set the environment variables (requests picks these up automatically):
export https_proxy="http://username:password@proxy:port"
export http_proxy="http://username:password@proxy:port"
In my case our company has multiple proxies and I was using the incorrect proxy details; when I tried with the correct one, it worked. Thanks to Stack Overflow.
How can I use automatic NTLM authentication from Python on Windows?
I want to be able to access the TFS REST API from Windows without hardcoding my password, the same as I do from the web browser (Firefox's network.automatic-ntlm-auth.trusted-uris, for example).
I found this answer which works great for me because:
I'm only going to run it from Windows, so portability isn't a problem
The response is a simple json document, so no need to store an open session
It's using the WinHTTP.WinHTTPRequest.5.1 COM object to handle authentication natively:
import win32com.client

URL = 'http://bigcorp/tfs/page.aspx'
COM_OBJ = win32com.client.Dispatch('WinHTTP.WinHTTPRequest.5.1')
COM_OBJ.SetAutoLogonPolicy(0)    # 0 = always send the logged-on user's credentials
COM_OBJ.Open('GET', URL, False)  # False = synchronous request
COM_OBJ.Send()
print(COM_OBJ.ResponseText)
You can do that with https://github.com/requests/requests-kerberos. Under the hood it uses https://github.com/mongodb-labs/winkerberos. The latter is marked as beta and I'm not sure how stable it is, but I have had requests-kerberos in use for a while without any issues.
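For reference, a minimal sketch of what that looks like; the URL is a placeholder, and the mutual_authentication setting is just one option (requests-kerberos also offers REQUIRED and DISABLED):
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# Negotiate/Kerberos authentication using the logged-on user's credentials.
response = requests.get(
    'http://bigcorp/tfs/page.aspx',
    auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL),
)
print(response.status_code)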
Maybe a more stable solution would be https://github.com/brandond/requests-negotiate-sspi, which uses pywin32's SSPI implementation.
I found the solution here: https://github.com/mullender/python-ntlm/issues/21
pip install requests
pip install requests_negotiate_sspi
import requests
from requests_negotiate_sspi import HttpNegotiateAuth
GetUrl = "http://servername/api/controller/Methodname" # Here you need to set your get Web api url
response = requests.get(GetUrl, auth=HttpNegotiateAuth())
print("Get Request Outpot:")
print("--------------------")
print(response.content)
For requests over HTTPS:
import requests
from requests_negotiate_sspi import HttpNegotiateAuth
import urllib3
urllib3.disable_warnings()
GetUrl = "https://servername/api/controller/Methodname" # Here you need to set your get Web api url
response = requests.get(GetUrl, auth=HttpNegotiateAuth(), verify=False)
print("Get Request Outpot:")
print("--------------------")
print(response.content)
NTLM credentials are based on data obtained during the interactive logon process and include a one-way hash of the password, so you have to provide the credentials yourself. Python has the requests_ntlm library, which allows for HTTP NTLM authentication.
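As a rough sketch of what that looks like (the URL, domain, and account are placeholders, and note that this does mean supplying the password explicitly):
import requests
from requests_ntlm import HttpNtlmAuth

# HTTP NTLM authentication with explicit credentials.
response = requests.get(
    'http://bigcorp/tfs/page.aspx',
    auth=HttpNtlmAuth('DOMAIN\\username', 'password'),
)
print(response.status_code)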
You can reference this article to access the TFS REST API:
Python Script to Access Team Foundation Server (TFS) Rest API
If you are using TFS 2017 or VSTS, you can try using a Personal Access Token in a Basic Auth HTTP header along with your REST request.
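A minimal sketch of that approach; the server URL and API path are placeholders for your own instance, and with Basic auth the username can be left blank while the PAT goes in the password field:
import requests

pat = 'your-personal-access-token'
# Basic auth with an empty username and the PAT as the password.
response = requests.get(
    'https://tfs-server/DefaultCollection/_apis/projects?api-version=2.0',
    auth=('', pat),
)
print(response.json())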
I am trying to use urllib2 through a proxy; however, after trying just about every variation of passing my verification details with urllib2, I either get a request that hangs forever and returns nothing, or I get 407 errors. I can connect to the web fine using my browser, which connects to a proxy PAC and redirects accordingly; however, I can't seem to do anything via the command line (curl, wget, urllib2, etc.), even when I use the proxies that the PAC file redirects to. I tried setting my proxy to each of the proxies from the PAC file using urllib2, and none of them work.
My current script looks like this:
import urllib2 as url
proxy = url.ProxyHandler({'http': 'username:password@my.proxy:8080'})
auth = url.HTTPBasicAuthHandler()
opener = url.build_opener(proxy, auth, url.HTTPHandler)
url.install_opener(opener)
url.urlopen("http://www.google.com/")
which throws HTTP Error 407: Proxy Authentication Required. I also tried:
import urllib2 as url
handlePass = url.HTTPPasswordMgrWithDefaultRealm()
handlePass.add_password(None, "http://my.proxy:8080", "username", "password")
auth_handler = url.HTTPBasicAuthHandler(handlePass)
opener = url.build_opener(auth_handler)
url.install_opener(opener)
url.urlopen("http://www.google.com")
which hangs just like curl and wget do when they time out.
What do I need to do to diagnose the problem? How is it possible that I can connect via my browser but not from the command line on the same computer using what would appear to be the same proxy and credentials?
Might it be something to do with the router? If so, how could it distinguish between browser HTTP requests and command-line HTTP requests?
Frustrations like this are what drove me to use Requests. If you're doing significant amounts of work with urllib2, you really ought to check it out. For example, to do what you wish to do using Requests, you could write:
import requests
from requests.auth import HTTPProxyAuth

proxy = {'http': 'http://my.proxy:8080'}
auth = HTTPProxyAuth('username', 'password')
r = requests.get('http://www.google.com/', proxies=proxy, auth=auth)
print(r.text)
Or you could wrap it in a Session object so that every request automatically uses the proxy information (plus it will store and handle cookies automatically!):
s = requests.Session()
s.proxies = proxy  # newer versions of requests set these on the session, not the constructor
s.auth = auth
r = s.get('http://www.google.com/')
print(r.text)