I am trying to run a REST request on Windows 7, but it is not executed by the Python code below. The code works on Ubuntu but not on Windows 7:
def get_load_names(url='http://<ip>:5000/loads_list'):
    response = requests.get(url)
    if response.status_code == 200:
        jData = json.loads(response.content)
        print(jData)
    else:
        print('error', response)
Also, if I paste the URL into a browser, I see the request output, so I assume it has something to do with the firewall.
I created rules to open port 5000 for inbound and outbound traffic, but no luck so far.
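One way to tell a firewall drop apart from an application error is to add a short timeout and inspect which exception fires. This is a minimal diagnostic sketch; 192.0.2.1 is a placeholder (an unroutable documentation address) standing in for the Windows machine's IP:

```python
import requests

# Placeholder address (TEST-NET-1, always unroutable); substitute the real server IP.
url = 'http://192.0.2.1:5000/loads_list'

try:
    # A short timeout turns silently dropped packets (a classic firewall
    # symptom) into a fast, identifiable exception instead of a long hang.
    response = requests.get(url, timeout=2)
    print(response.status_code)
except requests.exceptions.ConnectTimeout:
    print('Timed out connecting - packets may be silently dropped (firewall?)')
except requests.exceptions.ConnectionError as e:
    print('Connection refused or reset:', e)
```

A timeout points at a silent drop (firewall), while an immediate "connection refused" means the packets arrived but nothing was listening on the port.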
Unless you have a very specific reason for writing your own error handling, you should use the built-in raise_for_status()
import requests
import json
response = requests.get('http://<ip>:5000/loads_list')
response.raise_for_status()
jData = json.loads(response.text)
print(jData)
This will hopefully raise an informative error message that you can deal with.
Problem Description
I am using the following Python code to retrieve data from a given website through an API. The problem is that I am not receiving anything. When I print(str(response.status)+" "+response.reason), I get 302 FOUND and nothing else is printed. From what I saw, the HTTP response status code 302 Found is a common way of performing URL redirection.
Question
I saw that there is a way to set allow_redirects to False in order to solve that problem, but I am forced to use Python 2.7; I can't use Python 3. Is there a way to add allow_redirects to the request in Python 2.7? I also can't use the requests library (so import requests is not an option).
#!/usr/bin/env python
import sys
import json
import httplib

# Retrieve list of errors from Error Viewer
def retrieve_errors_from_error_viewer(errors):
    headers = {"Content-Type": "application/json", "Accept": "text/html"}
    data = {"dba": "XXX", "phase": "PROD"}
    conn = httplib.HTTPConnection('errorviewer.toys.net')
    conn.request('POST', '/api/errors', json.dumps(data), headers)
    response = conn.getresponse()
    print(str(response.status)+" "+response.reason)
    print(response.read())

if __name__ == "__main__":
    # Retrieve Errors From ErrorViewer
    errors = []
    retrieve_errors_from_error_viewer(errors)
If you use httplib2 instead of httplib, you have the follow_all_redirects option, which should solve your problem.
I'm trying to download images with Python, but this one picture won't download.
I don't know the reason: when I run the code, it just stops and nothing happens.
No image, no error code...
Here's the code; please tell me the reason and a solution.
import urllib.request

num = 404

def down(URL):
    fullname = str(num) + ".jpg"
    urllib.request.urlretrieve(URL, fullname)

im = "https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg"
down(im)
This code will work for you; try changing the URL you use and see the result:
import requests

pic_url = "https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg"
cookies = dict(BCPermissionLevel='PERSONAL')

with open('aa.jpg', 'wb') as handle:
    response = requests.get(pic_url, headers={"User-Agent": "Mozilla/5.0"}, cookies=cookies, stream=True)
    if not response.ok:
        print(response)
    for block in response.iter_content(1024):
        if not block:
            break
        handle.write(block)
What @MoetazBrayek says in their comment (but not their answer) is correct: the website you're querying is blocking the request.
It's common for sites to block requests based on user-agent or referer: if you try to curl https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg you will get an HTTP error (403 Access Denied):
❯ curl -I https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg
HTTP/2 403
Apparently The Sun wants a browser's user-agent, and specifically the string "mozilla" is enough to get through:
❯ curl -I -A mozilla https://www.thesun.co.uk/wp-content/uploads/2020/09/67d4aff1-ddd0-4036-a111-3c87ddc0387e.jpg
HTTP/2 200
You will have to either switch to the requests package or replace your url string with a proper urllib.request.Request object so you can customise more pieces of the request. And apparently urlretrieve does not support Request objects so you will also have to use urlopen:
import shutil
import urllib.request

req = urllib.request.Request(URL, headers={'User-Agent': 'mozilla'})
res = urllib.request.urlopen(req)
assert res.status == 200
with open(filename, 'wb') as out:
    shutil.copyfileobj(res, out)
I have a Python script that downloads a snapshot from a camera, using auth to log in to the camera. For older-style cameras it works with no issues; with the new one it doesn't. I have tested the link and credentials by copying them from my Python script to make sure they work, and they do, but I still can't log in and I am not sure why. The commented-out URL is the one that works; the Uniview one doesn't. I have replaced the password with the correct one, and I also tested the link in Chromium, where it works.
import requests

# hikvision old cameras
#url = 'http://192.168.100.110/ISAPI/Streaming/channels/101/picture'
# uniview
url = 'http://192.168.100.108:85/images/snapshot.jpg'

r = requests.get(url, auth=('admin', 'password'))
if r.status_code == 200:
    with open('/home/pi/Desktop/image.jpg', 'wb') as out:
        for bits in r.iter_content():
            out.write(bits)
else:
    print(r.status_code)
    print(r.content)
Below is the response I get
b'{\r\n"Response": {\r\n\t"ResponseURL": "/images/snapshot.jpg",\r\n\t"ResponseCode": 3,\r\n \t"SubResponseCode": 0,\r\n \t"ResponseString": "Not Authorized",\r\n\t"StatusCode": 401,\r\n\t"StatusString": "Unauthorized",\r\n\t"Data": "null"\r\n}\r\n}\r\n'
So it looks like Hikvision uses Basic access authentication while Uniview uses Digest access authentication, so according to the docs you need to change your request to:
from requests.auth import HTTPDigestAuth
r = requests.get(url, auth=HTTPDigestAuth('admin','password'))
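If you're not sure which scheme a device expects, the server's 401 response advertises it in the WWW-Authenticate header ("Basic realm=..." vs "Digest realm=..."). A small helper, sketched here (the pick_auth function and the camera URL are illustrative, not part of requests):

```python
import requests
from requests.auth import HTTPBasicAuth, HTTPDigestAuth

def pick_auth(www_authenticate, username, password):
    """Choose an auth class from a WWW-Authenticate header value."""
    if www_authenticate and www_authenticate.lower().startswith('digest'):
        return HTTPDigestAuth(username, password)
    return HTTPBasicAuth(username, password)

# Usage against the camera from the question (hypothetical URL):
# probe = requests.get('http://192.168.100.108:85/images/snapshot.jpg')
# auth = pick_auth(probe.headers.get('WWW-Authenticate'), 'admin', 'password')
# r = requests.get('http://192.168.100.108:85/images/snapshot.jpg', auth=auth)
```

This way the same snapshot script can serve both the old Basic-auth cameras and the new Digest-auth one.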
I am trying to get an HTTP response from a website using the requests module, but I get status code 410 in my response:
<Response [410]>
From the documentation, it appears that the forwarding URL for the web content may not be intentionally available to clients. Is this indeed the case, or am I missing something? I'm trying to confirm whether the webpage can be scraped at all:
url = 'http://www.b2i.us/profiles/investor/ResLibraryView.asp?ResLibraryID=81517&GoTopage=3&Category=1836&BzID=1690&G=666'
try:
    response = requests.get(url)
except requests.exceptions.RequestException as e:
    print(e)
Some websites don't respond well to HTTP requests with 'python-requests' as the User-Agent string.
You can get a 200 OK response if you set the User-Agent header to 'Mozilla'.
url='http://www.b2i.us/profiles/investor/ResLibraryView.asp?ResLibraryID=81517&GoTopage=3&Category=1836&BzID=1690&G=666'
headers={'User-Agent':'Mozilla/5'}
response = requests.get(url, headers=headers)
print(response)
< Response [200] >
This works on Mac OS X, but I am having issues with the same approach on Windows, in a VMware virtual machine I run automated tasks from. Why would the behavior be different? Is there a separate workaround for Windows machines?
I'm using requests module to retrieve content from the website kat.cr
and here is the code I used:
try:
    response = requests.get('http://kat.cr')
    response.raise_for_status()
except Exception as e:
    print(e)
else:
    return response.text
At first the code worked just fine and I could retrieve the website's source code, but then it stopped, and I keep receiving this message: "404 Client Error: Not Found for url: https://kat.cr"
I tried fixing this issue with user-agent like this:
from fake_useragent import UserAgent

try:
    ua = UserAgent()
    ua.update()
    headers = {'User-Agent': ua.random}
    response = requests.get(url, headers=headers)
    response.raise_for_status()
except Exception as e:
    print(e)
else:
    return response.text
But this doesn't seem to work either.
Can you please help me fix this problem? Thanks.
I think that, as users suggested, you may be IP-blocked.
Try a proxy.
Proxies with Python 'Requests' module
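A minimal sketch of routing requests through a proxy with a Session (the proxy address below is a placeholder; substitute one you actually have access to):

```python
import requests

# Placeholder proxy address; replace with a real proxy host and port.
proxies = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:3128',
}

session = requests.Session()
session.proxies.update(proxies)

# Every request made through this session is now routed via the proxy:
# response = session.get('http://kat.cr', timeout=10)
# print(response.status_code)
```

Since the block is IP-based, requests leaving from the proxy's address rather than yours should get through, assuming the proxy itself isn't blocked too.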