So far I'm seeing a lot of info on how to download a file with the Requests module, but nothing covering actually cancelling a download.
I'm currently looping through a series of URLs to see whether I get a 200 response code or not. If I land on a URL that starts a file download (r.status_code == 200), how do I cancel this file download, or, better yet, prevent the download in the first place?
Instead of GET you could use a HEAD request to get header information only.
r = requests.head('http://www.example.com')
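For the use case in the question (looping over a series of URLs), a minimal sketch could look like this; the URL list below is just a placeholder:

import requests

urls = ['http://www.example.com/file1.zip', 'http://www.example.com/file2.zip']  # placeholder list

for url in urls:
    # HEAD fetches only the headers, so no file body is ever transferred.
    # Note: requests.head() does not follow redirects unless told to.
    r = requests.head(url, allow_redirects=True)
    print(url, r.status_code)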
I have a database of thousands of files online, and I want to check what their status is (e.g. if the file exists, if it sends us to a 404, etc.) and update this in my database.
I've used urllib.request to download files in a Python script. However, downloading terabytes of files would obviously take a long time. Parallelizing the process would help, but ultimately I just don't want to download all the data, I only want to check the status. Is there an ideal way (using urllib or another package) to check the HTTP response code of a given URL?
Additionally, if I can get the file size from the server (which would be in the HTTP response), then I can also update this in my database.
If your web server is standards-based, you can use a HEAD request instead of a GET. It returns the same status without actually fetching the page.
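For example (an untested sketch with a placeholder URL); many servers also report the file size in the Content-Length header of a HEAD response, which would cover the second part of the question:

import requests

url = 'http://www.example.com/somefile.zip'  # placeholder
r = requests.head(url, allow_redirects=True)
print(r.status_code)                     # e.g. 200, 404, ...
print(r.headers.get('Content-Length'))   # size in bytes, if the server reports it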
The requests module lets you check the status code of the response.
Just do:
import requests
url = 'https://www.google.com' # Change to your link
response = requests.get(url)
print(response.status_code)
This code prints 200, so the request was successful.
I am trying to download a torrent file with this code:
url = "https://itorrents.org/torrent/0BB4C10F777A15409A351E58F6BF37E8FFF53CDB.torrent"
r = requests.get(url, allow_redirects=True)
open('test123.torrent', 'wb').write(r.content)
It downloads a torrent file, but when I load it into BitTorrent an error occurs.
It says: Unable to Load, Torrent Is Not Valid Bencoding.
Can anybody please help me resolve this problem? Thanks in advance.
This page uses Cloudflare to prevent scraping. I am sorry to say that bypassing Cloudflare is very hard if you only use requests, and the measures Cloudflare takes are updated frequently. The page checks whether your browser supports JavaScript; if it does not, the server will not give you the bytes of the file. That is why your code does not work. (You can look at r.text to see the response content: it is an HTML challenge page, not a file.)
Under these circumstances, I think you should consider using Selenium.
Bypassing Cloudflare can be a pain, so I suggest using a library that handles it. Keep in mind that your code may still break in the future, because Cloudflare changes its techniques periodically; if you use a library, you will just need to update the library (at least you can hope so).
I have only used a similar library in NodeJS, but I see Python also has something like that - cloudscraper.
Example:
import cloudscraper
scraper = cloudscraper.create_scraper() # returns a CloudScraper instance
# Or: scraper = cloudscraper.CloudScraper() # CloudScraper inherits from requests.Session
print(scraper.get("http://somesite.com").text)  # => "<!DOCTYPE html><html><head>..."
Depending on your usage you may need to consider using proxies - CloudFlare can still block you if you send too many requests.
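Applied to the .torrent URL from the question, a rough sketch (assuming cloudscraper can pass the challenge for this particular site) could look like this:

import cloudscraper

scraper = cloudscraper.create_scraper()
url = "https://itorrents.org/torrent/0BB4C10F777A15409A351E58F6BF37E8FFF53CDB.torrent"
r = scraper.get(url, allow_redirects=True)
# write the raw bytes (not r.text) so the bencoded data is not corrupted
with open('test123.torrent', 'wb') as f:
    f.write(r.content)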
Also, if you are working with video torrents, you may be interested in Torrent Stream Server. It's a server that downloads and streams video at the same time, so you can watch the video without fully downloading it.
You can also do this by adding cookies to the request headers, but the cookies expire after a while.
Therefore the only reliable long-term solution is to download the file through a real browser.
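For illustration, the cookie-based approach might look like the sketch below; the cookie name and values are placeholders that you would have to copy from a real browser session, and the request stops working once the cookie expires:

import requests

url = "https://itorrents.org/torrent/0BB4C10F777A15409A351E58F6BF37E8FFF53CDB.torrent"
headers = {'User-Agent': 'Mozilla/5.0'}                    # copy your real browser's User-Agent
cookies = {'cf_clearance': '<value copied from browser>'}  # placeholder; expires after a while
r = requests.get(url, headers=headers, cookies=cookies, allow_redirects=True)
with open('test123.torrent', 'wb') as f:
    f.write(r.content)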
For my image classification project I need to collect classified images, and a good source for me would be the different webcams around the world that stream video on the internet, like this one:
https://www.skylinewebcams.com/en/webcam/espana/comunidad-valenciana/alicante/benidorm-playa-poniente.html
I don't really have any experience with video streaming or web scraping in general, so after searching for information on the internet, I came up with this naive code in Python:
import requests

url = 'https://www.skylinewebcams.com/a816de08-9805-4cc2-94e6-2daa3495eb99'
r1 = requests.get(url, stream=True)
filename = "stream.avi"
if r1.status_code == 200:
    with open(filename, 'w') as f:
        for chunk in r1.iter_content(chunk_size=1024):
            f.write(chunk)
else:
    print("Received unexpected status code {}".format(r1.status_code))
where the url address was taken from the source of the video block from the website:
<video data-html5-video=""
       poster="//static.skylinewebcams.com/_2933625150.jpg" preload="metadata"
       src="blob:https://www.skylinewebcams.com/a816de08-9805-4cc2-94e6-2daa3495eb99"></video>
but it does not work (the avi file is empty), even though the video streams fine in the browser. Can anybody explain how to capture this video stream to a file?
I've made some progress since then. Here is the code:
print ("Recording video...")
url='https://hddn01.skylinewebcams.com/02930601ENXS-1523680721427.ts'
r1 = requests.get(url, stream=True)
filename = "stream.avi"
num=0
if(r1.status_code == 200):
with open(filename,'wb') as f:
for chunk in r1.iter_content(chunk_size=1024):
num += 1
f.write(chunk)
if num>5000:
print('end')
break
else:
print("Received unexpected status code {}".format(r.status_code))
Now I can get some piece of the video written to the file. What I've changed is: 1) in open(filename, 'wb') I changed 'w' to 'wb' to write binary data, and, more importantly, 2) I changed the url. I looked in the Chrome devtools 'Network' tab to see which requests the browser sends to get the live stream, and just copied the most recent one; it requests a .ts file.
Next, I found out how to get the addresses of the .ts video files. One can use the m3u8 module (installable via pip) like this:
import m3u8
m3u8_obj = m3u8.load('https://hddn01.skylinewebcams.com/live.m3u8?a=k2makj8nd279g717kt4d145pd3')
playlist = [el['uri'] for el in m3u8_obj.data['segments']]
The playlist of video files will then look something like this:
['https://hddn04.skylinewebcams.com/02930601ENXS-1523720836405.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720844347.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720852324.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720860239.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720868277.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720876252.ts']
and I can download each of the video files from the list.
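For example, a minimal sketch that downloads each segment and appends it to a single file (reusing the playlist list built above) could be:

import requests

with open('stream.ts', 'wb') as f:
    for uri in playlist:
        seg = requests.get(uri, stream=True)
        if seg.status_code == 200:
            for chunk in seg.iter_content(chunk_size=1024):
                f.write(chunk)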
The only problem left is that in order to load the playlist, I first need to open the webpage in a browser; otherwise the playlist comes back empty. Probably opening the webpage initiates the streaming, which creates the m3u8 file on the server so it can be requested. I still don't know how to initiate the streaming from Python without opening the page in a browser.
The list turns out empty because you're making an HTTP request without headers (which means you're doing it programmatically for sure) and most sites just respond to those with 403 outright.
You should use a library like Requests or pycurl to add headers to your requests and they should work fine. For an example request (complete with headers), you can open your web browser's developer console while watching streaming, find an HTTP request for the m3u8 url, right-click on it, and "copy as cURL". Note that there are site-specific, arbitrary headers that may be required to be sent with each request.
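For example, something along these lines (the header values are placeholders; copy the real ones, including the query token after live.m3u8?, from the request your browser makes):

import m3u8
import requests

headers = {
    'User-Agent': 'Mozilla/5.0',                   # placeholder; copy from the browser request
    'Referer': 'https://www.skylinewebcams.com/',  # some sites check this as well
}
resp = requests.get('https://hddn01.skylinewebcams.com/live.m3u8?a=k2makj8nd279g717kt4d145pd3',
                    headers=headers)
m3u8_obj = m3u8.loads(resp.text)
playlist = [seg['uri'] for seg in m3u8_obj.data['segments']]
print(playlist)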
If you want to scrape multiple sites with different headers, and/or want to future-proof your code for if they change the headers, addresses or formats, then you probably need something more advanced. Worst-case scenario, you might need to run a headless browser to open the site with WebDriver/Selenium and capture the requests it makes to generate your requests.
Keep in mind that you should read each site's ToS, or you might be performing illegal activities. Scraping while breaking the ToS is basically digital trespassing, and I think at least Craigslist has already won lawsuits on those grounds.
I'm trying to export a CSV from this page via a python script. The complicated part is that the page opens after clicking the export button on this page, begins the download, and closes again, rather than just hosting the file somewhere static. I've tried using the Requests library, among other things, but the file it returns is empty.
Here's what I've done:
from requests import get

url = 'http://aws.state.ak.us/ApocReports/CampaignDisclosure/CDExpenditures.aspx?exportAll=True&%3bexportFormat=CSV&%3bisExport=True%22+id%3d%22M_C_sCDTransactions_csfFilter_ExportDialog_hlAllCSV?exportAll=True&exportFormat=CSV&isExport=True'
with open('CD_Transactions_02-27-2017.CSV', "wb") as file:
    # get request
    response = get(url)
    # write to file
    file.write(response.content)
I'm sure I'm missing something obvious, but I'm pulling my hair out.
It looks like the file is being generated on demand, and the url stays valid only as long as the session lasts.
There are multiple requests from the browser to the webserver (including POST requests).
So to get those files via code, you would have to simulate the browser, possibly including session state etc. (and in this case also __VIEWSTATE).
To see the whole communication, you can use developer tools in the browser (usually F12, then select NET to see the traffic), or use something like WireShark.
In other words, this won't be an easy task.
If this is open government data, it might be better to just ask that government for the data or ask for possible direct links to the (unfiltered) files (sometimes there is a public ftp server for example) - or sometimes there is an API available.
The file is created on demand but you can download it anyway. Essentially you have to:
Establish a session to save cookies and viewstate
Submit a form in order to click the export button
Grab the link which lies behind the popped-up csv-button
Follow that link and download the file
You can find working code here (if you don't mind that it's written in R): Save response from web-scraping as csv file
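For reference, a rough Python sketch of the same idea; the name of the export field below is hypothetical (inspect the real form in your browser's developer tools), and note that ASP.NET pages usually require echoing back the hidden __VIEWSTATE and __EVENTVALIDATION fields:

import requests
from bs4 import BeautifulSoup

base = 'http://aws.state.ak.us/ApocReports/CampaignDisclosure/CDExpenditures.aspx'

with requests.Session() as s:
    # 1. establish a session and collect the hidden ASP.NET form fields (__VIEWSTATE etc.)
    page = s.get(base)
    soup = BeautifulSoup(page.text, 'html.parser')
    form_data = {inp['name']: inp.get('value', '')
                 for inp in soup.select('input[type="hidden"]') if inp.get('name')}
    # 2. add whatever field the export button submits -- hypothetical name below
    form_data['ctl00$ExportButton'] = 'Export'
    # 3. submit the form; the CSV (or a link to it) comes back in the response
    resp = s.post(base, data=form_data)
    with open('CD_Transactions.csv', 'wb') as f:
        f.write(resp.content)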
I have been having problems with a script I am developing whereby I am receiving no output and the memory usage of the script is getting larger and larger over time. I have figured out the problem lies with some of the URLs I am checking with the Requests library. I am expecting to download a webpage however I download a large file instead. All this data is then stored in memory causing my issues.
What I want to know is: is there any way with the requests library to check what is being downloaded? With wget I can see: Length: 710330974 (677M) [application/zip].
Is this information available in the headers with requests? If so, is there a way of terminating the download once I figure out it is not an HTML webpage?
Thanks in advance.
Yes, the headers can tell you a lot about the page; most pages will include a Content-Length header.
By default, however, the request is downloaded in its entirety before the .get() or .post(), etc. call returns. Set the stream=True keyword to defer loading the response:
response = requests.get(url, stream=True)
Now you can inspect the headers and just discard the request if you don't like what you find:
length = int(response.headers.get('Content-Length', 0))
if length > 1048576:
    print('Response larger than 1MB, discarding')
Subsequently accessing the .content or .text attributes, or the .json() method will trigger a full download of the response.
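Putting this together for the original question, a sketch that discards anything that does not look like an HTML page (the 1 MB limit is arbitrary and the URL is a placeholder) could be:

import requests

url = 'http://www.example.com/big-file.zip'  # placeholder

response = requests.get(url, stream=True)
content_type = response.headers.get('Content-Type', '')
length = int(response.headers.get('Content-Length', 0))

if 'text/html' not in content_type or length > 1048576:
    # close the connection without ever reading the body
    response.close()
else:
    html = response.text  # only now is the body actually downloaded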