401 response with Python requests

I'm using the "requests" library to get an image from a URL ('http://any_login:any_password@10.10.9.2/ISAPI/Streaming/channels/101/picture?snapShotImageType=JPEG'), but the response is a 401 error. That URL is from my RTSP camera.
I tried using 'HTTPBasicAuth', 'HTTPDigestAuth' and 'HTTPProxyAuth', but none of them work.
import requests
from requests.auth import HTTPBasicAuth

url = "http://any_login:any_password@10.10.9.2/ISAPI/Streaming/channels/101/picture?snapShotImageType=JPEG"
response = requests.get(url, auth=HTTPBasicAuth("any_login", "any_password"))
if response.status_code == 200:
    with open("sample.jpg", 'wb') as f:
        f.write(response.content)
I expected an image file from the RTSP stream, but I got a 401 error.

Given your username, I suspect your password may contain non-ASCII characters. I had a similar issue with a password containing diacritics.
This worked:
curl -u user:pwd --basic https://example.org
This (and variations) threw 401 Unauthorized:
import requests
requests.get("https://example.org", auth=requests.auth.HTTPBasicAuth("user","pwd"))
Changing the password to ASCII-only characters solved the issue.
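If changing the password is not an option, one workaround is to build the Authorization header yourself with an explicit encoding. A minimal sketch, assuming the server decodes the credentials as UTF-8: curl sends the raw bytes from your shell (usually UTF-8), while requests historically encodes Basic-auth credentials as latin-1, which would explain the mismatch. The password below is hypothetical.
import base64
import requests

user, pwd = "user", "pwé"  # hypothetical password with a diacritic
# encode the credentials as UTF-8 explicitly, then base64 them for the header
token = base64.b64encode(f"{user}:{pwd}".encode("utf-8")).decode("ascii")
response = requests.get("https://example.org",
                        headers={"Authorization": "Basic " + token})
print(response.status_code)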

Related

Can't download image with Python

I'm trying to download images with Python, but this one picture won't download.
I don't know the reason: when I run it, it just stops and nothing happens,
no image, no error code...
Here's the code; please tell me the reason and a solution.
import urllib.request

num = 404

def down(URL):
    fullname = str(num) + ".jpg"  # e.g. "404.jpg"
    urllib.request.urlretrieve(URL, fullname)

im = "https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg"
down(im)
This code will work for you; try changing the URL you use and see the result:
import requests

pic_url = "https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg"
cookies = dict(BCPermissionLevel='PERSONAL')
with open('aa.jpg', 'wb') as handle:
    # stream=True defers the download so the body can be written in chunks
    response = requests.get(pic_url, headers={"User-Agent": "Mozilla/5.0"}, cookies=cookies, stream=True)
    if not response.ok:
        print(response)
    for block in response.iter_content(1024):
        if not block:
            break
        handle.write(block)
What @MoetazBrayek says in their comment (but not answer) is correct: the website you're querying is blocking the request.
It's common for sites to block requests based on user-agent or referer: if you try to curl https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg you will get an HTTP error (403 Access Denied):
❯ curl -I https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg
HTTP/2 403
Apparently The Sun wants a browser's user-agent, and specifically the string "mozilla" is enough to get through:
❯ curl -I -A mozilla https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg
HTTP/2 200
You will have to either switch to the requests package or replace your URL string with a proper urllib.request.Request object so you can customise more pieces of the request. And since urlretrieve does not support Request objects, you will also have to use urlopen:
import shutil
import urllib.request

# URL and filename as in the question's down() helper
req = urllib.request.Request(URL, headers={'User-Agent': 'mozilla'})
res = urllib.request.urlopen(req)
assert res.status == 200
with open(filename, 'wb') as out:
    shutil.copyfileobj(res, out)
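Plugged back into the question's down() helper, the whole thing might look like this (a sketch, keeping the num-based filename from the question):
import shutil
import urllib.request

num = 404

def down(URL):
    fullname = str(num) + ".jpg"
    # a browser-like User-Agent gets past the block described above
    req = urllib.request.Request(URL, headers={'User-Agent': 'mozilla'})
    with urllib.request.urlopen(req) as res:
        with open(fullname, 'wb') as out:
            shutil.copyfileobj(res, out)

down("https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg")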

Getting 403 Forbidden error when requesting JSON output with apikey

I am trying to request information from a server with Python. The API key is correct, yet I still get a 403 error. It works with curl, but not with Python.
Here is the curl code that outputs JSON:
curl -H "apiKey: xxx" https://kretaglobalmobileapi.ekreta.hu/api/v1/Institute/3928
And here is my code that outputs Forbidden error:
from urllib.request import Request, urlopen
import json
ker = Request('https://kretaglobalmobileapi.ekreta.hu/api/v1/Institute/3928')
ker.add_header('apiKey', 'xxxx')
content = json.loads(urlopen(ker))
print(content)
What is the problem?
urlopen returns an HTTPResponse object, so in order to read the contents, use the read() function. Otherwise your code looks fine.
req = Request('https://kretaglobalmobileapi.ekreta.hu/api/v1/Institute/3928')
req.add_header('apikey', 'xxx')
content = urlopen(req).read()
print(content)
You can also use another library, for instance requests, if the above method doesn't work:
import requests

r = requests.get('<MY_URI>', headers={'apikey': 'xxx'})
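If you go the requests route, it can also decode the JSON body for you; a minimal sketch, assuming the same endpoint and apikey header as above:
import requests

r = requests.get('https://kretaglobalmobileapi.ekreta.hu/api/v1/Institute/3928',
                 headers={'apikey': 'xxx'})
r.raise_for_status()  # raise an exception on 4xx/5xx instead of failing silently
content = r.json()    # parse the JSON body into Python objects
print(content)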

Python - Requests HTTP range not working

According to this answer I can use the Range header to download only a part of an HTML page, but with this code:
import requests

url = "http://stackoverflow.com"
headers = {"Range": "bytes=0-100"}  # bytes 0-100 inclusive
r = requests.get(url, headers=headers)
print(r.text)
I get the whole html page. Why isn't it working?
If the web server does not support the Range header, it will be ignored.
Try another server that supports the header, for example tools.ietf.org:
import requests
url = "http://tools.ietf.org/rfc/rfc2822.txt"
headers = {"Range": "bytes=0-100"}
r = requests.get(url, headers=headers)
assert len(r.text) <= 101 # not exactly 101, because r.text does not include header
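A way to tell whether the server honored the range is to check for a 206 Partial Content status and the Content-Range response header; these are standard HTTP semantics, not specific to requests:
import requests

r = requests.get("http://tools.ietf.org/rfc/rfc2822.txt",
                 headers={"Range": "bytes=0-100"})
print(r.status_code)                   # 206 if partial content was returned, 200 if ignored
print(r.headers.get("Content-Range"))  # e.g. "bytes 0-100/..." when the range is honored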
I'm having the same problem. The server I'm downloading from supports the Range header. Using requests, the header is ignored and the entire file is downloaded with a 200 status code. Meanwhile, sending the request via urllib3 correctly returns the partial content with a 206 status code.
I suppose this must be some kind of bug or incompatibility. requests works fine with other files, including the one in the example below. Accessing my file requires basic authorization - perhaps that has something to do with it?
If you run into this, urllib3 may be worth trying. You'll already have it because requests uses it. This is how I worked around my problem:
import urllib3

url = "https://www.rfc-editor.org/rfc/rfc2822.txt"
http = urllib3.PoolManager()
response = http.request('GET', url, headers={'Range': 'bytes=0-100'})
# if the range is honored, response.status is 206 and response.data holds the first 101 bytes
Update: I tried sending a Range header to https://stackoverflow.com/, which is the link you specified. This returns the entire content with both Python modules as well as curl, despite the response header specifying accept-ranges: bytes. I can't say why.
I tried it without using:
headers = {"Range": "bytes=0-100"}
Try this instead:
import requests

# you can change the url
response = requests.get('http://example.com/')
print(response.text)

HTTP Error 401: Authorization Required while downloading a file from HTTPS website and saving it

Basically, I need a program that, given a URL, downloads a file and saves it. I know this should be easy, but there are a couple of drawbacks here...
First, it is part of a tool I'm building at work. I have everything else besides that, and the URL is HTTPS. The URL is one of those you would paste into your browser and get a pop-up asking whether you want to open or save the file (.txt).
Second, I'm a beginner at this, so if there's info I'm not providing, please ask me. :)
I'm using Python 3.3, by the way.
I tried this:
import urllib.request
response = urllib.request.urlopen('https://websitewithfile.com')
txt = response.read()
print(txt)
And I get:
urllib.error.HTTPError: HTTP Error 401: Authorization Required
Any ideas? Thanks!!
You can do this easily with the requests library.
import requests

response = requests.get('https://websitewithfile.com/text.txt', verify=False, auth=('user', 'pass'))
print(response.text)
To save the file you would type:
with open('filename.txt', 'w') as fout:
    fout.write(response.text)
(I would suggest you always set verify=True in the requests.get() command.)
See the requests documentation for details.
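For binary files (such as the .gz example further down), write the raw bytes instead of response.text; a small sketch with the same hypothetical URL:
import requests

response = requests.get('https://websitewithfile.com/file.gz', auth=('user', 'pass'))
with open('file.gz', 'wb') as fout:
    fout.write(response.content)  # raw bytes, safe for non-text content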
Doesn't the browser also ask you to sign in? Then you need to repeat the request with the added authentication, as in these answers:
Python urllib2, basic HTTP authentication, and tr.im
Equally good: Python, HTTPS GET with basic authentication
If you don't have the requests module, then the code below works for Python 2.6 or later. Not sure about 3.x:
import urllib
testfile = urllib.URLopener()
testfile.retrieve("https://randomsite.com/file.gz", "/local/path/to/download/file")
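For Python 3, a sketch of the equivalent using urllib.request.urlretrieve (urllib.URLopener is a Python 2 API):
import urllib.request

# same URL and destination path as the Python 2 snippet above
urllib.request.urlretrieve("https://randomsite.com/file.gz",
                           "/local/path/to/download/file")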
You can try this solution: https://github.qualcomm.com/graphics-infra/urllib-siteminder
import siteminder
import getpass

url = 'https://XYZ.dns.com'
# getpass prompts for your password interactively
r = siteminder.urlopen(url, getpass.getuser(), getpass.getpass(), "dns.com")
data = r.read()
# or parse HTML tables directly: import pandas as pd; pd.read_html(r.read())

Python Requests Invalid URL Label error

I'm trying to access Shopify's API, which uses a URL format of:
https://apikey:password@hostname/admin/resource.xml
e.g. http://7ea7a2ff231f9f7:95c5e8091839609c864@iliketurtles.myshopify.com/admin/orders.xml
Doing curl api_url downloads the correct XML, however when I do:
import requests

api_url = 'http://7ea7a2ff231f9f7d:95c5e8091839609c864@iliketurtles.myshopify.com/admin/orders.xml'
r = requests.get(api_url)  # raises "URL has an invalid label" error
Any idea why I'm getting this? curl / opening the link directly in the browser works fine. Is it because the URL is too long?
Thanks!
The error ('URL has an invalid label.') is probably a bug in the requests library: it applies IDNA encoding (for internationalized domain names) to the hostname with the userinfo still attached; source:
netloc = netloc.encode('idna').decode('utf-8')
That can raise a 'label empty or too long' error for a long username:password. You can report it on the requests issue tracker.
The a:b@example.com form is deprecated anyway.
requests.get('https://a:b@example.com') should be equivalent to requests.get('https://example.com', auth=('a', 'b')) if all characters in username:password are from the [-A-Za-z0-9._~!$&'()*+,;=] set.
curl and requests also differ when there are percent-encoded characters in the userinfo, e.g. https://a:%C3%80@example.com leads to curl generating the following HTTP header:
Authorization: Basic YTrDgA==
but requests produces:
Authorization: Basic YTolQzMlODA=
i.e.:
>>> import base64
>>> base64.b64decode('YTrDgA==')
b'a:\xc3\x80'
>>> _.decode('utf-8')
'a:À'
>>> base64.b64decode('YTolQzMlODA=')
b'a:%C3%80'
It's not the length of the URL. If I do:
import requests
test_url = 'http://www.google.com/?somereallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallyreallylongurl=true'
r = requests.get(test_url)
it returns <Response [200]>.
Have you tried making the request with the requests authentication parameters, as detailed in the documentation?
>>> requests.get('http://iliketurtles.myshopify.com/admin/orders.xml', auth=('ea7a2ff231f9f7', '95c5e8091839609c864'))
<Response [403]>
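As a workaround for the IDNA issue described above, you can split the credentials out of the URL yourself with the standard library and pass them via auth; a sketch:
from urllib.parse import urlsplit

import requests

api_url = 'http://7ea7a2ff231f9f7d:95c5e8091839609c864@iliketurtles.myshopify.com/admin/orders.xml'
parts = urlsplit(api_url)
# drop the userinfo from the netloc (assumes no explicit port in the URL)
bare_url = parts._replace(netloc=parts.hostname).geturl()
r = requests.get(bare_url, auth=(parts.username, parts.password))
print(r)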
