Python auth returns 401

I have a Python script that downloads a snapshot from a camera, logging in with HTTP auth. For the older-style cameras it works with no issues; with the new one it doesn't. I have verified the URL and credentials by copying them straight out of my Python script, and they work, but the script still can't log in and I am not sure why. The commented-out URL is the one that works; the Uniview one doesn't. I have replaced the password with the correct one, and I also tested the link in Chromium and it works.
import requests

# hikvision old cameras
#url = 'http://192.168.100.110/ISAPI/Streaming/channels/101/picture'
# uniview
url = 'http://192.168.100.108:85/images/snapshot.jpg'

r = requests.get(url, auth=('admin', 'password'))
if r.status_code == 200:
    with open('/home/pi/Desktop/image.jpg', 'wb') as out:
        for bits in r.iter_content():
            out.write(bits)
else:
    print(r.status_code)
    print(r.content)
Below is the response I get:
b'{\r\n"Response": {\r\n\t"ResponseURL": "/images/snapshot.jpg",\r\n\t"ResponseCode": 3,\r\n \t"SubResponseCode": 0,\r\n \t"ResponseString": "Not Authorized",\r\n\t"StatusCode": 401,\r\n\t"StatusString": "Unauthorized",\r\n\t"Data": "null"\r\n}\r\n}\r\n'

So it looks like Hikvision uses Basic access authentication while Uniview uses Digest access authentication, so according to the requests docs you need to change your request to:
from requests.auth import HTTPDigestAuth
r = requests.get(url, auth=HTTPDigestAuth('admin','password'))
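Putting that together with the rest of the script from the question (same URL and output path), a minimal sketch:

import requests
from requests.auth import HTTPDigestAuth

# Uniview answers with a digest challenge, so use HTTPDigestAuth
url = 'http://192.168.100.108:85/images/snapshot.jpg'
r = requests.get(url, auth=HTTPDigestAuth('admin', 'password'))

if r.status_code == 200:
    with open('/home/pi/Desktop/image.jpg', 'wb') as out:
        for bits in r.iter_content():
            out.write(bits)
else:
    print(r.status_code)
    print(r.content)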

Related

Why I'm getting different responses when i use urllib.request.urlopen and requests.get
import requests

r = requests.get('https://upload.wikimedia.org/wikipedia/commons/1/14/Sunset_Boulevard_%281950_poster%29.jpg')
r.status_code
# response: 403

from urllib.request import urlopen

r = urlopen('https://upload.wikimedia.org/wikipedia/commons/1/14/Sunset_Boulevard_%281950_poster%29.jpg')
r.getcode()
# response: 200
First, check print(r.content) to see what the server actually sent back.
Usually it includes some explanation that helps pinpoint the problem.
For your code, it points to a problem with the User-Agent header.
Wikipedia: User-Agent policy
Some servers check the User-Agent header to send different content to different systems/browsers/devices. They also use it to detect scripts/bots/spammers/hackers and block them.
If I use a header from a real browser (or even just the short Mozilla/5.0), it works.
import requests

headers = {'User-Agent': 'Mozilla/5.0'}

url = 'https://upload.wikimedia.org/wikipedia/commons/1/14/Sunset_Boulevard_(1950_poster).jpg'
#url = 'https://upload.wikimedia.org/wikipedia/commons/1/14/Sunset_Boulevard_%281950_poster%29.jpg'

r = requests.get(url, headers=headers)
print(r.status_code)
print(r.content[:100])

with open('image.jpg', 'wb') as fh:
    fh.write(r.content)
EDIT:
After running the code a few times it started working for me even without User-Agent. Maybe they were checking it for some other reason.

Python request resulting in blank response

I'm relatively new to Python so would like some help. I've created a script which simply uses the requests library and basic auth to connect to an API and returns the XML or JSON result.
# Imports
import requests
from requests.auth import HTTPBasicAuth
# Set variables
url = "api"
apiuser = 'test'
apipass = 'testpass'
# CALL API
r = requests.get(url, auth=HTTPBasicAuth(apiuser, apipass))
# Print Statuscode
print(r.status_code)
# Print XML
xmlString = str(r.text)
print(xmlString)
But it returns a blank string.
If I use a browser to call the API and enter the credentials, I get the following response:
<Response>
    <status>SUCCESS</status>
    <callId>99999903219032190321</callId>
    <result xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Dummy">
        <authorFullName>jack jones</authorFullName>
        <authorOrderNumber>1</authorOrderNumber>
    </result>
</Response>
Can anyone tell me where I'm going wrong?
What API are you connecting to?
Try adding a user-agent to the header:
r = requests.get(url, auth=HTTPBasicAuth(apiuser, apipass), headers={'User-Agent':'test'})
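For a self-contained version of that suggestion (the URL and credentials are the placeholders from the question), a minimal sketch:

import requests
from requests.auth import HTTPBasicAuth

url = 'api'  # placeholder from the question
apiuser = 'test'
apipass = 'testpass'

# Some APIs return an empty body unless a User-Agent header is present
r = requests.get(url, auth=HTTPBasicAuth(apiuser, apipass),
                 headers={'User-Agent': 'test'})
print(r.status_code)
print(r.text)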
Although this is not an exact answer for the OP, it may solve the issue for someone getting a blank response from python-requests.
I was getting a blank response because of the wrong content type: I was expecting HTML rather than JSON or a login success page. The correct Content-Type for me was application/x-www-form-urlencoded.
Essentially I had to do the following to make my script work.
import requests

data = 'arcDate=2021/01/05'
headers = {
    'Content-Type': 'application/x-www-form-urlencoded',
}

r = requests.post('https://www.deccanherald.com/getarchive', data=data, headers=headers)
print(r.status_code)
print(r.text)
Learn more about this in application/x-www-form-urlencoded or multipart/form-data?
Run this and see what responses you get.
import requests

url = "https://google.com"
r = requests.get(url)
print(r.status_code)
print(r.text)
# r.json() parses a JSON response body; it raises an error here because google.com returns HTML
When you need to pass things along with your GET, PUT, DELETE, or POST requests, you add them to the request:
url = "https://google.com"
headers = {'api key': 'blah92382377432432'}
r = requests.get(url, headers=headers)
Then you should see the same type of responses. Long story short: print(r.text) to see the response; once you see the format of the response you get, you can move it around however you want.
I get an empty response only when the authentication failed or was denied.
The HTTP status is still below 400.
However, in the response headers you can find:
'X-Seraph-LoginReason': 'AUTHENTICATED_FAILED'
or
'X-Seraph-LoginReason': 'AUTHENTICATED_DENIED'
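A minimal sketch of checking for that header; the URL and credentials are placeholders, and the header name comes from Atlassian's Seraph login framework:

import requests

r = requests.get('https://jira.example.com/rest/api/2/myself',
                 auth=('user', 'pass'))  # hypothetical URL and credentials
print(r.status_code)
print(r.headers.get('X-Seraph-LoginReason'))  # e.g. AUTHENTICATED_FAILED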
If the response is empty, not even a status code, I could suggest waiting some time before printing; maybe the server is taking a while to return the response to you.
import time
time.sleep(5)
Not the nicest thing, but it's worth trying.
How can I make a time delay in Python?
I guess there are no errors during execution.
EDIT: nvm, you mentioned that you got a status code; I thought you were literally getting nothing.
On the side, if you are using Python 3 you have to use print(); the function replaced the Python 2 print statement.

Python - Requests HTTP range not working

According to this answer I can use the Range header to download only part of an HTML page, but with this code:
import requests

url = "http://stackoverflow.com"
headers = {"Range": "bytes=0-100"}  # first 101 bytes (0-100 inclusive)
r = requests.get(url, headers=headers)
print(r.text)
I get the whole html page. Why isn't it working?
If the web server does not support the Range header, the header will be ignored.
Try another server that supports the header, for example tools.ietf.org:
import requests
url = "http://tools.ietf.org/rfc/rfc2822.txt"
headers = {"Range": "bytes=0-100"}
r = requests.get(url, headers=headers)
assert len(r.text) <= 101 # not exactly 101, because r.text does not include header
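To confirm the server actually honored the range, you can also check for a 206 status and the Content-Range response header; a quick sketch against the same URL:

import requests

url = "http://tools.ietf.org/rfc/rfc2822.txt"
r = requests.get(url, headers={"Range": "bytes=0-100"})
print(r.status_code)                   # 206 Partial Content if the range was honored
print(r.headers.get('Content-Range'))  # e.g. 'bytes 0-100/...' with the total size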
I'm having the same problem. The server I'm downloading from supports the Range header. Using requests, the header is ignored and the entire file is downloaded with a 200 status code. Meanwhile, sending the request via urllib3 correctly returns the partial content with a 206 status code.
I suppose this must be some kind of bug or incompatibility. requests works fine with other files, including the one in the example below. Accessing my file requires basic authorization - perhaps that has something to do with it?
If you run into this, urllib3 may be worth trying. You'll already have it because requests uses it. This is how I worked around my problem:
import urllib3
url = "https://www.rfc-editor.org/rfc/rfc2822.txt"
http = urllib3.PoolManager()
response = http.request('GET', url, headers={'Range':'bytes=0-100'})
Update: I tried sending a Range header to https://stackoverflow.com/, which is the link you specified. This returns the entire content with both Python modules as well as curl, despite the response header specifying accept-ranges: bytes. I can't say why.
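If you want to probe up front whether a server at least advertises range support, a HEAD request is cheap; note that an Accept-Ranges header still doesn't guarantee a Range request will be honored, as the stackoverflow.com case above shows. A sketch using the same urllib3 approach:

import urllib3

url = 'https://www.rfc-editor.org/rfc/rfc2822.txt'
http = urllib3.PoolManager()

head = http.request('HEAD', url)
print(head.headers.get('Accept-Ranges'))  # 'bytes' if ranges are advertised

response = http.request('GET', url, headers={'Range': 'bytes=0-100'})
print(response.status)     # 206 if the server honored the range
print(len(response.data))  # 101 bytes for bytes=0-100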
I tried it without using:
headers = {"Range": "bytes=0-100"}
Try this instead:
import requests

# you can change the url
r = requests.get('http://example.com/')
print(r.text)

Python library requests cannot open a site

import requests

url = 'http://developer.usa.gov/1usagov.json'
r = requests.get(url)
The Python code hangs forever, and I am not behind an HTTP proxy or anything.
Pointing my browser directly at the URL works.
Following my comment above: I think your problem is the continuous stream. You need to do something like in the docs:
r = requests.get(url, stream=True)
if int(r.headers['content-length']) < TOO_LONG:
    # rebuild the content and parse
with a while instead of the if if you want a continuous loop; see the sketch below.
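For a stream that never ends there may be no content-length at all, so a line-by-line consumer is another option. A minimal sketch, assuming the endpoint emits one JSON object per line (the URL is the one from the question and may no longer be live):

import json
import requests

r = requests.get('http://developer.usa.gov/1usagov.json', stream=True)
for line in r.iter_lines():
    if line:  # skip keep-alive newlines
        print(json.loads(line))  # one JSON record per line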

HTTP Error 401: Authorization Required while downloading a file from HTTPS website and saving it

Basically I need a program that, given a URL, downloads a file and saves it. I know this should be easy, but there are a couple of drawbacks here...
First, it is part of a tool I'm building at work. I have everything else besides that, and the URL is HTTPS; it's one of those URLs you would paste into your browser and get a pop-up asking whether you want to open or save the file (.txt).
Second, I'm a beginner at this, so if there's info I'm not providing, please ask me. :)
I'm using Python 3.3, by the way.
I tried this:
import urllib.request
response = urllib.request.urlopen('https://websitewithfile.com')
txt = response.read()
print(txt)
And I get:
urllib.error.HTTPError: HTTP Error 401: Authorization Required
Any ideas? Thanks!!
You can do this easily with the requests library.
import requests
response = requests.get('https://websitewithfile.com/text.txt',verify=False, auth=('user', 'pass'))
print(response.text)
To save the file you would write:
with open('filename.txt', 'w') as fout:
    fout.write(response.text)
(I would suggest you always set verify=True in the requests.get() command.)
Here is the documentation:
Doesn't the browser also ask you to sign in? Then you need to repeat the request with the added authentication like this:
Python urllib2, basic HTTP authentication, and tr.im
Equally good: Python, HTTPS GET with basic authentication
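If you'd rather stay with urllib.request, which the question uses, a minimal basic-auth sketch looks like this; the URL, file name, and credentials are placeholders, and this assumes the site uses HTTP Basic authentication:

import urllib.request

url = 'https://websitewithfile.com/file.txt'  # placeholder
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, 'user', 'pass')
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(handler)

response = opener.open(url)
with open('filename.txt', 'wb') as fout:
    fout.write(response.read())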
If you don't have the requests module, then the code below works for Python 2.6 or later. Not sure about 3.x.
import urllib
testfile = urllib.URLopener()
testfile.retrieve("https://randomsite.com/file.gz", "/local/path/to/download/file")
You can try this solution: https://github.qualcomm.com/graphics-infra/urllib-siteminder
import siteminder
import getpass

url = 'https://XYZ.dns.com'
r = siteminder.urlopen(url, getpass.getuser(), getpass.getpass(), "dns.com")
# getpass prompts for your password at the terminal

data = r.read()
# or, to parse HTML tables: pd.read_html(r.read())  (needs: import pandas as pd)
