For my image classification project I need to collect labeled images, and a good source for me would be the various webcams around the world that stream video on the internet. Like this one:
https://www.skylinewebcams.com/en/webcam/espana/comunidad-valenciana/alicante/benidorm-playa-poniente.html
I don't really have any experience with video streaming or web scraping in general, so after searching for information on the internet, I came up with this naive code in Python:
import requests

url = 'https://www.skylinewebcams.com/a816de08-9805-4cc2-94e6-2daa3495eb99'
r1 = requests.get(url, stream=True)
filename = "stream.avi"
if r1.status_code == 200:
    with open(filename, 'w') as f:
        for chunk in r1.iter_content(chunk_size=1024):
            f.write(chunk)
else:
    print("Received unexpected status code {}".format(r1.status_code))
where the URL was taken from the source of the video element on the website:
<video data-html5-video=""
  poster="//static.skylinewebcams.com/_2933625150.jpg" preload="metadata"
  src="blob:https://www.skylinewebcams.com/a816de08-9805-4cc2-94e6-2daa3495eb99"></video>
but it does not work (the avi file is empty), even though the video stream plays fine in the browser. Can anybody explain to me how to capture this video stream into a file?
I've made some progress since then. Here is the code:
print ("Recording video...")
url='https://hddn01.skylinewebcams.com/02930601ENXS-1523680721427.ts'
r1 = requests.get(url, stream=True)
filename = "stream.avi"
num=0
if(r1.status_code == 200):
with open(filename,'wb') as f:
for chunk in r1.iter_content(chunk_size=1024):
num += 1
f.write(chunk)
if num>5000:
print('end')
break
else:
print("Received unexpected status code {}".format(r.status_code))
Now I can get a piece of video written to the file. What I've changed is: 1) in open(filename, 'wb') I changed 'w' to 'wb' to write binary data, but most importantly 2) I changed the URL. I looked in the 'Network' tab of Chrome DevTools to see which requests the browser sends to get the live stream, and simply copied the most recent one, which requests a .ts file.
Next, I found out how to get the addresses of the .ts video files. One can use the m3u8 module (installable with pip) like this:
import m3u8

m3u8_obj = m3u8.load('https://hddn01.skylinewebcams.com/live.m3u8?a=k2makj8nd279g717kt4d145pd3')
playlist = [el['uri'] for el in m3u8_obj.data['segments']]
The playlist of video files will then look something like this:
['https://hddn04.skylinewebcams.com/02930601ENXS-1523720836405.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720844347.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720852324.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720860239.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720868277.ts',
'https://hddn04.skylinewebcams.com/02930601ENXS-1523720876252.ts']
and I can download each of the video files from the list.
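As a rough sketch of that step (assuming the playlist variable from the snippet above, and that simply appending the .ts segments to one file is good enough for later processing), the download could look like this:

import requests

# Rough sketch: fetch each .ts segment from the playlist built above and
# append it to a single file. Assumes `playlist` from the m3u8 snippet.
with open("stream.ts", "wb") as out:
    for uri in playlist:
        r = requests.get(uri, stream=True)
        if r.status_code == 200:
            for chunk in r.iter_content(chunk_size=1024):
                out.write(chunk)
        else:
            print("Skipping {} (status {})".format(uri, r.status_code))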
The only problem left is that in order to load the playlist I first need to open the webpage in a browser; otherwise the playlist comes back empty. Opening the webpage probably initiates the streaming, which creates the m3u8 file on the server so that it can be requested. I still don't know how to initiate the streaming from Python, without opening the page in a browser.
The list turns out empty because you're making an HTTP request without headers (which makes it obvious the request is programmatic), and most sites respond to those with a 403 outright.
You should use a library like Requests or pycurl to add headers to your requests, and they should work fine. For an example request (complete with headers), open your web browser's developer console while the stream is playing, find the HTTP request for the m3u8 URL, right-click on it, and choose "Copy as cURL". Note that some sites require arbitrary, site-specific headers to be sent with each request.
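A minimal sketch of that with requests might look like the following; the header names and values here are only illustrative, so copy the real ones from the "Copy as cURL" output:

import requests

# Illustrative headers only -- take the real ones from "Copy as cURL".
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Referer": "https://www.skylinewebcams.com/",
}
url = "https://hddn01.skylinewebcams.com/live.m3u8?a=k2makj8nd279g717kt4d145pd3"
r = requests.get(url, headers=headers)
print(r.status_code)
print(r.text[:200])   # should show the playlist instead of an error page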
If you want to scrape multiple sites with different headers, and/or want to future-proof your code for if they change the headers, addresses or formats, then you probably need something more advanced. Worst-case scenario, you might need to run a headless browser to open the site with WebDriver/Selenium and capture the requests it makes to generate your requests.
Keep in mind you might have to read each site's ToS, or you might otherwise be performing illegal activities. Scraping while breaking the ToS is basically digital trespassing, and I think at least Craigslist has already won lawsuits on that basis.
Related
I am trying to download a torrent file with this code:
url = "https://itorrents.org/torrent/0BB4C10F777A15409A351E58F6BF37E8FFF53CDB.torrent"
r = requests.get(url, allow_redirects=True)
open('test123.torrent', 'wb').write(r.content)
It downloads a torrent file, but when I load it into BitTorrent an error occurs.
It says "Unable to Load, Torrent Is Not Valid Bencoding".
Can anybody please help me resolve this problem? Thanks in advance.
This page uses Cloudflare to prevent scraping. I'm sorry to say that bypassing Cloudflare is very hard if you only use requests, and the measures Cloudflare takes are updated frequently. The page checks whether your browser supports JavaScript; if it doesn't, they won't give you the bytes of the file. That's why your request doesn't work. (You can use r.text to inspect the response content: it's an HTML page, not a file.)
Under these circumstances, I think you should consider using Selenium.
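One rough (untested) sketch of that idea is to let a real browser pass the JavaScript check and then reuse its cookies in requests. This assumes chromedriver is installed and on PATH, and it may still fail against newer Cloudflare checks:

import time

import requests
from selenium import webdriver

url = "https://itorrents.org/torrent/0BB4C10F777A15409A351E58F6BF37E8FFF53CDB.torrent"

driver = webdriver.Chrome()
driver.get("https://itorrents.org/")        # let the Cloudflare check run in a real browser
time.sleep(10)                              # give the JavaScript challenge time to complete
cookies = {c["name"]: c["value"] for c in driver.get_cookies()}
user_agent = driver.execute_script("return navigator.userAgent")
driver.quit()

# Reuse the browser's cookies and user agent for the actual download
r = requests.get(url, cookies=cookies, headers={"User-Agent": user_agent})
with open("test123.torrent", "wb") as f:
    f.write(r.content)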
Bypassing Cloudflare can be a pain, so I suggest using a library that handles it for you. Keep in mind that your code may still break in the future, because Cloudflare changes its techniques periodically; if that happens, you only need to update the library (at least that's what you hope for).
I've only used a similar library in Node.js, but I see Python also has something like that: cloudscraper.
Example:
import cloudscraper
scraper = cloudscraper.create_scraper() # returns a CloudScraper instance
# Or: scraper = cloudscraper.CloudScraper() # CloudScraper inherits from requests.Session
print(scraper.get("http://somesite.com").text)  # => "<!DOCTYPE html><html><head>..."
Depending on your usage you may need to consider using proxies - CloudFlare can still block you if you send too many requests.
Also, if you are working with video torrents, you may be interested in Torrent Stream Server. It's a server that downloads and streams video at the same time, so you can watch the video without fully downloading it.
We can do this by adding cookies to the request headers, but after some time the cookie expires. Therefore the only reliable solution is to download by opening a browser.
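A tiny sketch of that approach, with the cookie string copied from the browser's developer tools (the value below is a placeholder, and the request stops working once the cookie expires):

import requests

headers = {
    "User-Agent": "Mozilla/5.0",
    "Cookie": "cf_clearance=...",   # paste the real value from the browser's dev tools
}
url = "https://itorrents.org/torrent/0BB4C10F777A15409A351E58F6BF37E8FFF53CDB.torrent"
r = requests.get(url, headers=headers)
with open("test123.torrent", "wb") as f:
    f.write(r.content)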
So far I'm seeing a lot of info on how to download a file with the Requests module, but nothing covering actually cancelling a download.
I'm currently looping through a series of URLs to see whether I get a 200 response code or not. If I land on a URL that starts a file download (r.status_code == 200), how do I cancel the download, or better yet, prevent it from starting in the first place?
Instead of GET, you could use a HEAD request to get only the header information.
r = requests.head('http://www.example.com')
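Applied to the loop, a short sketch might look like this (the URL list is made up here); only headers come back, so no file body is ever downloaded:

import requests

urls = ["http://www.example.com/file1", "http://www.example.com/file2"]  # your own list

for url in urls:
    r = requests.head(url, allow_redirects=True)   # headers only, no body is transferred
    print(url, r.status_code)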
I'm trying to export a CSV from this page via a Python script. The complicated part is that, rather than the file being hosted somewhere static, a page opens after clicking the export button on the original page, starts the download, and then closes again. I've tried using the Requests library, among other things, but the file it returns is empty.
Here's what I've done:
from requests import get

url = 'http://aws.state.ak.us/ApocReports/CampaignDisclosure/CDExpenditures.aspx?exportAll=True&%3bexportFormat=CSV&%3bisExport=True%22+id%3d%22M_C_sCDTransactions_csfFilter_ExportDialog_hlAllCSV?exportAll=True&exportFormat=CSV&isExport=True'
with open('CD_Transactions_02-27-2017.CSV', "wb") as file:
    # get request
    response = get(url)
    # write to file
    file.write(response.content)
I'm sure I'm missing something obvious, but I'm pulling my hair out.
It looks like the file is being generated on demand, and the URL stays valid only as long as the session lasts.
There are multiple requests from the browser to the webserver (including POST requests).
So to get those files via code, you would have to simulate the browser, possibly including session state etc (and in this case also __VIEWSTATE ).
To see the whole communication, you can use the developer tools in the browser (usually F12, then select the Network tab to see the traffic), or use something like Wireshark.
In other words, this won't be an easy task.
If this is open government data, it might be better to just ask that government for the data or ask for possible direct links to the (unfiltered) files (sometimes there is a public ftp server for example) - or sometimes there is an API available.
The file is created on demand, but you can download it anyway. Essentially you have to:
1. Establish a session to save cookies and viewstate.
2. Submit a form in order to click the export button.
3. Grab the link which lies behind the popped-up csv-button.
4. Follow that link and download the file.
You can find working code here (if you don't mind that it's written in R): Save response from web-scraping as csv file
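In Python, a very rough sketch of those four steps might look like the following. The hidden-field handling mirrors what ASP.NET pages generally expect, but the export button's field name below is a placeholder you would need to take from the browser's network tab:

import requests
from bs4 import BeautifulSoup

page_url = "http://aws.state.ak.us/ApocReports/CampaignDisclosure/CDExpenditures.aspx"

with requests.Session() as s:                                   # 1) keep cookies / session state
    soup = BeautifulSoup(s.get(page_url).text, "html.parser")
    form = {i["name"]: i.get("value", "")                       # 2) echo back __VIEWSTATE and friends
            for i in soup.select("input[type=hidden]")}
    form["ExportButtonName"] = "Export"                         #    placeholder: real field name from dev tools
    r = s.post(page_url, data=form)                             # 3) "click" the export button
    with open("CD_Transactions.csv", "wb") as f:                # 4) save whatever comes back
        f.write(r.content)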
We have two servers (client-facing, and back-end database) between which we would like to transfer PDFs. Here's the data flow:
1. User requests PDF from website.
2. Site sends request to client server.
3. Client server requests PDF from back-end server (different IP).
4. Back-end server sends PDF to client server.
5. Client server sends PDF to website.
Steps 1-3 and 5 are all good, but step 4 is the issue.
We're currently using Flask and the requests library for our API calls and can transfer text and .csv files easily, but binary files such as PDFs are not working.
And no, I don't have any code, so take it easy on me. Just looking for a suggestion from someone who may have come across this issue.
As you said, you have no code, and that's fine, but I can only give a few suggestions.
I'm not sure how you're sending your files, but I'm assuming you're using Python's open function.
1. Make sure you are reading the file as bytes (e.g. open('<pdf-file>', 'rb')).
2. Cut the file up into chunks and send it as one file, so it doesn't freeze or get stuck (see the sketch after this list).
3. Try smaller PDF files; if this works, definitely try suggestion #2.
4. Use threads; you can multitask with them.
5. Have a download server. This can save memory and potentially bandwidth, and it also lets you skip sending the PDF back from Flask.
6. Don't use PDF files if you don't have to.
7. Use a library to do it for you.
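A minimal sketch of suggestions 1 and 2 on the Flask side (the route and file path here are made up for illustration):

from flask import Flask, Response

app = Flask(__name__)

@app.route("/pdf/<name>")
def serve_pdf(name):
    def generate():
        with open("/data/{}.pdf".format(name), "rb") as f:   # 1) read the file as bytes
            while True:
                chunk = f.read(8192)                          # 2) send it back chunk by chunk
                if not chunk:
                    break
                yield chunk
    return Response(generate(), mimetype="application/pdf")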
Hope this helps!
I wanted to share my solution to this, but give credit to @CoolqB for the answer. The key was including 'rb' to properly read the binary file, and including the codecs library. Here are the final code snippets:
Client request:
response = requests.get('https://www.mywebsite.com/_api_call')
Server response:
import codecs

f = codecs.open(file_name, 'rb').read()  # 'rb' so the PDF is read as raw bytes
return f
Client handle:
with codecs.open(file_to_write, 'wb') as f:   # 'wb' so the binary content is written as-is
    f.write(response.content)
And all is right with the world.
http://puu.sh/3Krct.png
My program generates random links to a service that hosts images, and it grabs and downloads random images. The program makes a lot of requests, and so it has to go through proxies.
When the program starts, I just give it the path to a fresh, large proxy list; however, sometimes a proxy will not connect to the website, sometimes it will return a custom HTML page, or the image service will return a page with the message "You don't have permission to view this image." Even then, the program still saves the response and writes the page to disk with a .png extension.
And so sometimes those HTML/text pages are saved as .png files:
http://puu.sh/3KrxM.png
http://puu.sh/3KrGN.png
Is there any way I can prevent the downloading of these pages, and only download the actual images?
Thank you.
if self.proxy != False:
    # make our requests go through the proxy
    self.opener.retrieve(url, filename)
else:
    urllib.request.urlretrieve(url, filename)
I think you should change the logic.
If a proxy returns an error while getting the page you asked for, it normally responds with an HTTP status code != 200.
You should then check, in order:
1. The HTTP status code (reject the response if it isn't 200).
2. The Content-Type header returned (for an image it should be, in this case, image/jpeg).
And for this type of task I suggest using the requests module.
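A small sketch of those checks with requests (the proxy entry and the function name are placeholders for illustration):

import requests

proxies = {"http": "http://10.0.0.1:8080"}          # placeholder proxy entry

def download_image(url, filename):
    r = requests.get(url, proxies=proxies, timeout=10)
    if r.status_code != 200:                         # proxy or server returned an error page
        return False
    if not r.headers.get("Content-Type", "").startswith("image/"):   # HTML instead of an image
        return False
    with open(filename, "wb") as f:                  # only now save the actual image bytes
        f.write(r.content)
    return True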