How to download PDF files using Python?

I was looking for a way to download PDF files in Python, and I saw answers on other questions recommending the urllib module. I tried to download a PDF file with it, but when I try to open the downloaded file, a message shows up saying that the file cannot be opened.
(screenshot of the error message)
This is the code I used:
import urllib
urllib.urlretrieve("http://papers.gceguide.com/A%20Levels/Mathematics%20(9709)/9709_s11_qp_42.pdf", "9709_s11_qp_42.pdf")
What am I doing wrong? Also, the file automatically saves to the directory my python file is in. How do I change the location to which it gets saved?
Edit:
I tried again with the link to a sample pdf, http://unec.edu.az/application/uploads/2014/12/pdf-sample.pdf
The code is working with this link, so why won't it work for the other one?

Try this. It works.
import requests

url = 'https://pdfs.semanticscholar.org/c029/baf196f33050ceea9ecbf90f054fd5654277.pdf'
r = requests.get(url, stream=True)
with open('C:/Users/MICRO HARD/myfile.pdf', 'wb') as f:
    f.write(r.content)
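To the asker's second question: the save location is determined entirely by the destination path, whether you pass it to urlretrieve or use it with open as above. A minimal sketch, where "downloads" is a placeholder directory:

```python
import os

# "downloads" is a placeholder; any folder you can write to works.
target_dir = "downloads"
os.makedirs(target_dir, exist_ok=True)

# The full destination path controls where the file lands; passing this
# to urlretrieve (or to open) saves it there instead of the script's folder.
dest = os.path.join(target_dir, "9709_s11_qp_42.pdf")
print(dest)
```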

You can also use the wget module to download PDFs from a link:
import wget
wget.download(link)  # link is the URL of the PDF
Here's a guide on how to search for and download all PDF files from a webpage in one go: https://medium.com/the-innovation/notesdownloader-use-web-scraping-to-download-all-pdfs-with-python-511ea9f55e48

You can't download the PDF content from the given URL using
requests or urllib, because the URL initially points to another
web page, and only after that page loads does it serve the PDF.
If in doubt, save the response as HTML instead of PDF and inspect it.
You need to use a headless browser like PhantomJS to download files
from this kind of web page.
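A quick way to run that check: real PDFs start with the magic bytes %PDF-, so looking at the first bytes of the response tells you whether you got a PDF or an HTML page. A small sketch (the byte strings below stand in for a real response body):

```python
def looks_like_pdf(data: bytes) -> bool:
    # Real PDF files begin with the magic bytes "%PDF-"; an HTML
    # page served in their place will not.
    return data[:5] == b"%PDF-"

print(looks_like_pdf(b"%PDF-1.4 rest of the file"))            # True
print(looks_like_pdf(b"<html><body>loading...</body></html>")) # False
```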

Related

Python requests module not following redirect for file download

Trying to automate downloading a .zip file from the link here (the links will always be different, but they are always in this format):
If this link is entered into a web browser, it downloads a file called Badges.zip. When I download it from Python with the code below, it saves to Badges.zip, but the .zip is not an archive; it's some Google Analytics code. It's as if the requests module is not following the redirect all the way to the file. I've tried get, head, streaming the download, and lots of other approaches, and I can't get it to download the file correctly. Here's the current code I'm using:
import requests
url = "https://schools.clever.com/files/badges.zip?fromEmail=1&randomID=5f9cffb0ee8c81418ac2e019"
r = requests.get(url, allow_redirects=True)
with open('c:/data/Badges.zip', 'wb') as f:
    f.write(r.content)
I'm open to any ideas. Have tried other modules and get similar results. I'm even open to kicking off external utilities if needed like wget or curl (which I haven't had any luck with yet either).
Note that the Clever Badges in this download have been voided to prevent use.
Thanks!

How to use requests to download files with original names?

I made a little script to automate some downloads, but I have a small issue. For example, the link is www.link.com/29292292.pdf, but if I press the download button in my browser, the name of the file is "file with a good name.pdf".
I don't know how to tell requests to download the file as "file with a good name.pdf".
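Servers usually suggest that "good" name in the Content-Disposition response header, so parsing it out is one way to save under the original name. A sketch, where the header dict stands in for response.headers and the regex is a simplification of the full header grammar:

```python
import re

def filename_from_headers(headers, fallback):
    # Look for filename="..." in the Content-Disposition header;
    # fall back to the name taken from the URL if it's absent.
    cd = headers.get("Content-Disposition", "")
    match = re.search(r'filename="?([^";]+)"?', cd)
    return match.group(1) if match else fallback

headers = {"Content-Disposition": 'attachment; filename="file with a good name.pdf"'}
print(filename_from_headers(headers, "29292292.pdf"))  # file with a good name.pdf
print(filename_from_headers({}, "29292292.pdf"))       # 29292292.pdf
```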

Downloading an excel report from website in python saves a blank file

I have about 8 reports that I need to pull from a system every week, which takes quite a bit of time, so I am working on automating this process. I am using requests to log in to the site and download the files. However, when I download a file with my Python script, it comes back blank; when I use the same link in the browser, it's not blank. Below is my code:
import requests

payload = {
    'txtUsername': 'uid',
    'txtPassword': 'pass'
}
domain = 'https://example.com/login.aspx?ReturnUrl=%2fiweb%2f'
path = 'C:\\Users\\workspace\\data-in\\'

with requests.Session() as s:
    p = s.post(domain, data=payload)
    r = s.get('https://example.com/forms/MSWordFromSql.aspx?ContentType=excel&object=Organization&FormKey=f326228c-3c49-4531-b80d-d59600485557')
    with open(path + 'report1.xls', 'wb') as f:
        f.write(r.content)
A little about the URL: when I was looking for it, I found that it's wrapped in some JS behind a link labeled "Export Raw Data to Excel".
However, when I took a look at the path from which the file was actually downloaded, the true location of the report is this:
https://example.com/forms/MSWordFromSql.aspx?ContentType=excel&object=Organization&FormKey=f326228c-3c49-4531-b80d-d59600485557
This is the URL I am using in my code to download the report. After I run the script, the file is created, named, and saved to the correct directory, but it's empty. As I mentioned at the top of the thread, if I simply paste the URL above into the browser, it downloads the report with no problem.
I was also thinking about using Selenium to get this done but the issue is I cannot rename the files while they are being downloaded. I need each file to have a specific name because all of the downloaded reports are then used in another automation script.
As @Lucas mentioned, your Python code likely sends a different request than your browser does, and thus receives a different response.
I'd use the browser dev tools to inspect the request the browser makes to initiate the download. Use "Copy as curl" and try to reproduce the correct behavior from the command line.
Then reduce the differences between the curl request and the one your python code makes by removing unnecessary parts from the curl invocations and adding the necessary headers to your python code. https://curl.trillworks.com/ can help with the latter.
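For example, once the curl command works, the headers it sends can be attached to the Python request one by one until the download succeeds. A stdlib sketch with placeholder header values (swap in whatever your "Copy as curl" output shows; the same dict can also be passed to requests via headers=):

```python
import urllib.request

# Placeholder values; replace with the headers from your working curl command.
headers = {
    "User-Agent": "Mozilla/5.0",
    "Referer": "https://example.com/iweb/",
}
req = urllib.request.Request(
    "https://example.com/forms/MSWordFromSql.aspx?ContentType=excel",
    headers=headers,
)
# urllib stores header names with this capitalization:
print(req.get_header("User-agent"))  # Mozilla/5.0
```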

How to download a video when I get the URL of the MP4 file in selenium python? (WITHOUT URLLIB)

I can get to the point where the video is right in front of me. I need to loop through the urls and download all of these videos. The issue is that the request is stateful and I cannot use urllib because then authorization issues occur. How do I just target the three dots in chrome video viewer and download the file?
All I need now is to be able to download by clicking on the download button. I do not know if it can be done without the specification of coordinates. Like I said, the urls follow a pattern and I can generate them. The only issue is the authorization. Please help me get the videos through selenium.
Note that the video is in JavaScript so I cannot really target the three dots or the download button.
You can get the cookies from the driver and pass them to a requests Session, so you can download the file with the requests library:
import requests

cookies = driver.get_cookies()
s = requests.Session()
for cookie in cookies:
    s.cookies.set(cookie['name'], cookie['value'])

response = s.get(urlDownload, stream=True)
print(response.status_code)
with open(fileName, 'wb') as f:
    f.write(response.content)
You can use Selenium. Since there are only screenshots to go on, I cannot give you exact code, but something like this will help; you can find the XPath by inspecting the HTML:
# driver is an existing selenium webdriver instance
driver.find_element_by_xpath('xpath of 3 dots')

Script that is able to download a zip file from a server

Can you please help me make a script in Python that does the following:
download a zip file over http (I already have code for this one)
download a zip file from file://<server location>; I have a problem with this one. The location of the file is file://<server location>file.zip
I can't download the #2 file :(
Code below: #1 works over HTTP, but when using file://// it doesn't work. Does anybody have an idea how to download a zip file from file:////?
import urllib2
response = urllib2.urlopen('file:////server/file.zip')
print response.info()
html = response.read()
# do something
response.close() # best practice to close the file
urllib2 does not have handlers for the file:// protocol; I think it will open local files if there is no protocol given (//server/file.zip), but I've never used that and haven't tested it. If you have a local file name, you can just use open() and read() rather than urllib2.
Your code will be simpler if you use with closing (from contextlib); opened files are already context managers in Python 2.7 and 3.x, so they're even easier to use.
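Since a file://// URL is just a path on a mounted share, the copy can be done entirely with the stdlib and no urllib2 at all. A sketch where a temp file stands in for the real //server/file.zip (on Windows the source would be a UNC path like r"\\server\file.zip"):

```python
import os
import shutil
import tempfile

# Stand-in for the real //server/file.zip on the share.
with tempfile.NamedTemporaryFile(delete=False, suffix=".zip") as src:
    src.write(b"PK\x03\x04")  # zip archives start with these magic bytes
    src_path = src.name

# The actual download: a plain file copy, no URL handling needed.
dst_path = src_path + ".copy"
shutil.copyfile(src_path, dst_path)

with open(dst_path, "rb") as f:
    copied = f.read()
print(copied)  # b'PK\x03\x04'

os.remove(src_path)
os.remove(dst_path)
```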
