I'm trying to send requests to an HTTPS website from behind a corporate proxy that requires authentication.
The code below works:
import requests

requests.get('https://google.com',
             proxies={
                 'http': 'http://myusername:mypassword@10.20.30.40:8080',
                 'https': 'http://myusername:mypassword@10.20.30.40:8080'
             },
             verify=False)
but I want to avoid hardcoding the username and password in the script file, especially since we have to reset our passwords every 60 days. I need it to authenticate automatically as the logged-in Windows user, the same way a browser does.
I looked online and learned that this is not possible through requests (1, 2, 3, 4) and that I would have to resort to something like pycurl or px, but all the examples online provide the username and password explicitly. There was also this solution using win32com.client, but I have no idea how to use it in place of requests.
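One workaround along the px route, sketched below under assumptions not in the original question: run px (https://github.com/genotrance/px) locally. It listens on 127.0.0.1:3128 by default and performs the NTLM/Kerberos negotiation with the upstream corporate proxy using the logged-in Windows user's credentials, so the script itself never handles a password.

import requests

# Assumes px is installed and running with its default settings,
# listening on 127.0.0.1:3128 and forwarding to the corporate proxy
# using the current Windows user's credentials (SSPI).
local_proxy = {'http': 'http://127.0.0.1:3128',
               'https': 'http://127.0.0.1:3128'}
print(requests.get('https://google.com', proxies=local_proxy, verify=False).status_code)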
Related
I have programmed an application in Python and implemented an auto-update mechanism which simply retrieves a text file from a cloud server and then checks the version number.
This works fine, but some subsidiaries have their networks configured so that the cloud server can only be reached through a proxy server.
Now, retrieving something from the web while using a proxy server is generally not a big deal.
I could just use something like this:
import requests

url = 'https://www.cloudserver.com/versionfile'
proxy = 'http://user:pass@proxyserver:port'
proxies = {'http': proxy, 'https': proxy}
requests.get(url, proxies=proxies)
This works wonderfully. The problem is that I don't want my customers to have to enter a username, password, and proxy server. OK, I could get the username with getpass.getuser(), but not the password.
Another option that sounded promising was pypac:
>>> from pypac import PACSession
>>> session = PACSession()
>>> session.get('http://example.org')
<Response [407]>
Alas, it answers with 407 - Proxy Authentication Required.
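The 407 makes sense: a PAC file only tells the client which proxy to use, not how to authenticate against it. If you did have credentials, pypac's PACSession accepts a proxy_auth argument; a minimal sketch (the placeholder credentials are exactly what this question is trying to avoid asking for):

from pypac import PACSession
from requests.auth import HTTPProxyAuth

# PAC discovery picks the proxy; HTTPProxyAuth supplies the credentials.
session = PACSession(proxy_auth=HTTPProxyAuth('username', 'password'))
print(session.get('http://example.org').status_code)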
There are professional programs out there which just magically use the system proxy settings, including username and password (or maybe a hashed version or a ticket of some form), and never have to ask the user for anything. It just works; Firefox, for example, seems to do it that way.
Is it possible to extract or reuse the system settings to access the web without asking the user for credentials in Python?
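Part of this is straightforward: the standard library can read the system proxy settings (from the registry on Windows, or from environment variables elsewhere), though it will not return stored credentials, since Windows-integrated authentication happens at the SSPI level rather than as a stored password. A minimal sketch:

import urllib.request

# Returns a dict such as {'http': 'http://proxyserver:port', ...},
# read from the registry on Windows; no username or password included.
print(urllib.request.getproxies())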
I need to send requests through requests, but keeping my IP address and location private is mandatory. The question is: how can I combine a VPN with Python, or provide privacy in some other way?
import requests

headers = {'user-agent': 'my user agent',
           'accept': '*/*'}
print(requests.get('https://httpbin.org/ip', headers=headers).text)
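One VPN-free option is to route requests through a SOCKS proxy, for example a local Tor client. A sketch, assuming Tor is running on its default SOCKS port 9050 and that requests was installed with SOCKS support (pip install requests[socks]):

import requests

# 'socks5h' resolves DNS through the proxy as well,
# so name lookups don't leak your location.
proxies = {'http': 'socks5h://127.0.0.1:9050',
           'https': 'socks5h://127.0.0.1:9050'}
print(requests.get('https://httpbin.org/ip', proxies=proxies).text)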
I want to download some data from a website using Python's requests package. I'm sitting behind a proxy that needs authentication.
My problem is that my password contains the character #. I cannot change the password, since the machine is used by several people.
So if I use the syntax from http://docs.python-requests.org/en/latest/user/advanced/,
http://user:password@host/
requests splits the URL at the # in my password and misinterprets everything after it as the host. Is there a way to solve this? Maybe quoting or something similar?
As far as I know, you can manually use HTTPProxyAuth:
import requests
from requests.auth import HTTPProxyAuth
auth = HTTPProxyAuth('username', 'password')
proxy = {'http': 'http://host/'}
req = requests.get('http://www.google.com', proxies=proxy, auth=auth)
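Alternatively, if you prefer to keep the credentials in the URL, you can percent-encode the password so the # no longer breaks URL parsing; a sketch with a hypothetical password:

import requests
from urllib.parse import quote

password = 'pa#ss'  # hypothetical password containing '#'
# quote() turns '#' into '%23', so the URL parses correctly.
proxy = 'http://user:%s@host:8080' % quote(password, safe='')
proxies = {'http': proxy, 'https': proxy}
requests.get('http://www.google.com', proxies=proxies)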
I am writing a script to automatically scrape information from my company's directory website using mechanize. However, the interpreter raises _response.httperror_seek_wrapper: HTTP Error 401: Authorization Required on br.open(url) when I run my script.
This is the portion of my code where the interpreter runs into the error.
from sys import path
path.append("./mechanize/mechanize")
import _mechanize
from base64 import b64encode

def login(url, username, password):
    b64login = b64encode('%s:%s' % (username, password))
    br = _mechanize.Browser()
    br.set_handle_robots(False)
    br.addheaders.append(('Authorization', 'Basic %s' % b64login))
    br.open(url)
    r = br.response()
    print r.read()
The site I am trying to access is an internal site within my company's network, and it uses a GlobalSign certificate for authentication on company-issued computers.
I am sure the authentication information I am inputting is correct, and I have looked everywhere for a solution. Any hints on how to resolve this? Thanks!
It looks like your authentication methods don't match up. You state that your company uses GlobalSign certificates, but your code is using Basic authentication. They are NOT the same thing!
From a brief look at the mechanize documentation (limited as it is), you don't implement authentication by manually adding headers. It has its own add_password method for handling authentication.
Also, as a general HTTP authentication policy, you should NOT use preemptive authentication by adding the authentication headers yourself. You should set up your code with the necessary authentication (based on your library's documentation) and let it handle the authentication negotiation.
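A minimal sketch of the add_password route (the URL and credentials are placeholders, and this only covers the 401 challenge, not the GlobalSign client-certificate part):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
# Register the credentials; mechanize sends them only after the
# server challenges with 401, instead of preemptively.
br.add_password('https://directory.example.com/', 'username', 'password')
br.open('https://directory.example.com/')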
I am trying to use urllib2 through a proxy; however, after trying just about every variation of passing my credentials using urllib2, I either get a request that hangs forever and returns nothing, or I get 407 errors. I can connect to the web fine using my browser, which connects to a proxy auto-config (PAC) file and redirects accordingly; however, I can't seem to do anything from the command line with curl, wget, urllib2, etc., even if I use the proxies that the PAC file redirects to. I tried setting my proxy to each of the proxies from the PAC file using urllib2, none of which work.
My current script looks like this:
import urllib2 as url
proxy = url.ProxyHandler({'http': 'username:password@my.proxy:8080'})
auth = url.HTTPBasicAuthHandler()
opener = url.build_opener(proxy, auth, url.HTTPHandler)
url.install_opener(opener)
url.urlopen("http://www.google.com/")
which throws HTTP Error 407: Proxy Authentication Required. I also tried:
import urllib2 as url
handlePass = url.HTTPPasswordMgrWithDefaultRealm()
handlePass.add_password(None, "http://my.proxy:8080", "username", "password")
auth_handler = url.HTTPBasicAuthHandler(handlePass)
opener = url.build_opener(auth_handler)
url.install_opener(opener)
url.urlopen("http://www.google.com")
which hangs, like curl or wget, until it times out.
What do I need to do to diagnose the problem? How is it possible that I can connect via my browser but not from the command line on the same computer, using what would appear to be the same proxy and credentials?
Might it be something to do with the router? If so, how can it distinguish between browser HTTP requests and command-line HTTP requests?
Frustrations like this are what drove me to use Requests. If you're doing significant amounts of work with urllib2, you really ought to check it out. For example, to do what you wish to do using Requests, you could write:
import requests
from requests.auth import HTTPProxyAuth
proxy = {'http': 'http://my.proxy:8080'}
auth = HTTPProxyAuth('username', 'password')
r = requests.get('http://www.google.com/', proxies=proxy, auth=auth)
print r.text
Or you could wrap it in a Session object, and every request will automatically use the proxy information (plus it will store and handle cookies automatically!):
s = requests.Session()
s.proxies = proxy
s.auth = auth
r = s.get('http://www.google.com/')
print r.text
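For completeness, if you do need to stay with urllib2, the direct fix for the 407 is to use ProxyBasicAuthHandler rather than HTTPBasicAuthHandler, since the latter answers 401 challenges from servers, not 407 challenges from proxies. A sketch under that assumption, reusing the question's placeholder proxy and credentials:

import urllib2

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'http://my.proxy:8080', 'username', 'password')
# ProxyBasicAuthHandler responds to 407 Proxy-Authenticate challenges;
# HTTPBasicAuthHandler only handles 401 challenges from the end server.
proxy_handler = urllib2.ProxyHandler({'http': 'http://my.proxy:8080'})
proxy_auth_handler = urllib2.ProxyBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(proxy_handler, proxy_auth_handler)
print opener.open('http://www.google.com/').read()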