Well guys, I come here in a time of need. I've been trying to develop a batch job, and its first step is downloading a zipped file from the web. The first code I tried was this:
import requests

url = "http://servicos.ibama.gov.br/ctf/publico/areasembargadas/ConsultaPublicaAreasEmbargadas.php"
save_path = "C:/Users/gb2gaet"
proxies = {
    # I had to erase this for safety reasons
}
r = requests.get(url, proxies=proxies, stream=True, verify=False)
handle = open('test.zip', "wb")
for chunk in r.iter_content(chunk_size=512):
    if chunk:
        handle.write(chunk)
handle.close()
It turns out that I get a zipped file that can't be opened. After a long search I came across a possible solution that looks something like this:
import requests, zipfile, io

url = "http://servicos.ibama.gov.br/ctf/publico/areasembargadas/ConsultaPublicaAreasEmbargadas.php"
save_path = "C:/Users/gb2gaet"
proxies = {
    # you know
}
r = requests.get(url, proxies=proxies, stream=True, verify=False)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall(save_path)
but all I ended up getting was this error message:
zipfile.BadZipFile: File is not a zip file
I'd be forever grateful if any of you could help me with this.
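In case it helps with debugging: zipfile.BadZipFile just means the bytes you saved are not a zip archive, and with a URL like this the server often returns an HTML page (an error or the consultation form) instead of the file. A minimal diagnostic sketch, reusing the url and proxies variables from above, to check what actually comes back:

import requests

r = requests.get(url, proxies=proxies, stream=True, verify=False)
print(r.status_code)
print(r.headers.get("Content-Type"))       # a real archive is usually application/zip or application/octet-stream
print(next(r.iter_content(chunk_size=4)))  # a zip file starts with the magic bytes b'PK'

If the content type is text/html or the first bytes are not b'PK', the problem is the request itself (missing form fields, cookies, or a session step), not how the file is written to disk.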
from urllib.request import urlopen
open('Sample1.zip', 'wb').write(urlopen('Valid Url for Zip File').read())
I am trying to download a file from steamworkshopdownloader.io with requests, but it always returns a 500 error. What am I doing wrong? I am not very familiar with requests.
Code:
import requests

def downloadMap(map_id):
    session = requests.session()
    file = session.post("https://backend-02-prd.steamworkshopdownloader.io/api/details/file",
                        data={"publishedfileid": map_id})
    print(file)

downloadMap("814218628")
If you want to download a file from this API, try this code. It's adapted from the link in the comment I posted earlier (https://greasyfork.org/en/scripts/396698-steam-workshop-downloader/code) and converted into Python:
import requests
import json
import time

def download_map(map_id):
    s = requests.session()
    data = {
        "publishedFileId": map_id,
        "collectionId": None,
        "extract": True,
        "hidden": False,
        "direct": False,
        "autodownload": False
    }
    # ask the backend to prepare the download
    r = s.post('https://backend-01-prd.steamworkshopdownloader.io/api/download/request', data=json.dumps(data))
    print(r.json())
    uuid = r.json()['uuid']
    # poll the status endpoint until the archive is prepared
    data = f'{{"uuids":["{uuid}"]}}'
    while True:
        r = s.post('https://backend-01-prd.steamworkshopdownloader.io/api/download/status', data=data)
        print(r.json())
        if r.json()[uuid]['status'] == 'prepared':
            break
        time.sleep(1)
    # stream the prepared archive to disk
    params = (('uuid', uuid),)
    r = s.get('https://backend-01-prd.steamworkshopdownloader.io/api/download/transmit', params=params, stream=True)
    print(r.status_code)
    with open(f'./{map_id}.zip', 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)

download_map(814218628)
The code demonstrates how to use the API and downloads a file named 814218628.zip (or whatever map_id was provided) into the directory the script is run from. The zip archive contains the .udk file (a game map design created with the Unreal Engine Development Kit).
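If you also want to inspect or unpack the archive from the same script, here is a short follow-up sketch (assuming the 814218628.zip produced by the call above is in the working directory):

import zipfile

with zipfile.ZipFile('814218628.zip') as z:
    print(z.namelist())           # should show the .udk map file
    z.extractall('814218628')     # unpack into a folder named after the map id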
I'm trying to download a binary file and save it with its original name on the disk (Linux). Any ideas? This is what I have so far:
import requests

params = {'apikey': 'xxxxxxxxxxxxxxxxxxx', 'hash': 'xxxxxxxxxxxxxxxxxxxxxxxxx'}
response = requests.get('https://www.test.com/api/file/download', params=params)
downloaded_file = response.content
if response.status_code == 200:
    with open('/tmp/', 'wb') as f:
        f.write(response.content)
From your clarification in the comments, your issue is that you want to keep the file's original name.
If the URL directs to the raw binary data, then the last part of the URL would be its "original name", hence you can get that by parsing the URL as follows:
local_filename = url.split('/')[-1]
To put this into practice, and considering the context of the question, here is the code that does exactly what you need, copied as it is from another SO question:
def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
                # f.flush() commented by recommendation from J.F.Sebastian
    return local_filename
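For example, a call would then look like this (the URL is purely illustrative) and returns the name the file was saved under:

filename = download_file('https://www.example.com/files/report_2020.pdf')
print(filename)  # report_2020.pdf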
I couldn't post this as a comment, so I had to put it in an answer. I hope I have been clear enough. Tell me if you have any issues with the code. And when the issue is resolved, please also inform me so I can delete this, as the question has already been answered elsewhere.
EDIT
Here is a version for your code:
import requests

url = 'https://www.test.com/api/file/download'
params = {'apikey': 'xxxxxxxxxxxxxxxxxxx', 'hash': 'xxxxxxxxxxxxxxxxxxxxxxxxx'}
# stream=True belongs on the request itself, not in the query parameters
response = requests.get(url, params=params, stream=True)
local_filename = url.split('/')[-1]
totalbits = 0
if response.status_code == 200:
    with open(local_filename, 'wb') as f:
        for chunk in response.iter_content(chunk_size=1024):
            if chunk:
                totalbits += len(chunk)
                print("Downloaded", totalbits // 1024, "KB...")
                f.write(chunk)
NOTE: if you don't want it to show progress, just remove the print statement inside the loop. This was tested using this URL: https://imagecomics.com/uploads/releases/_small/DeadRabbit-02_cvr.jpg and it seemed to work pretty well. Again, if you have any issues, just comment down below.
I'm having problems with a CSV file upload using the requests.post method in Python 3.
from requests.auth import HTTPBasicAuth
import csv
import requests

user = 'myuser'
pw = 'mypass'
advertiserid = '10550'
campaignid = '12394'
url = 'http://example.example.com/api/edc/upload/'+advertiserid+'/'+campaignid+'/'+'?encoding=utf-8&fieldsep=%3B&decimalsep=.&date=DD%2FMM%2FYYYY&info=1&process=1'
csv = "myfile.csv"  # note: this shadows the csv module imported above
with open(csv, 'r') as f:
    r = requests.post(url, files={csv: f})
print(r)
The output is 'Response [502]'
Any idea of what could be the problem?
Many thanks!
You can refer to the documentation of the Requests library here: post-a-multipart-encoded-file
Change your request line to:
r = requests.post(url, files={'report.csv': f})
Try opening it in binary mode, and with an explicit 'text/csv' MIME type?
with open(csv, 'rb') as f:
    r = requests.post(url, files={'file': ('myfile.csv', f, 'text/csv', {'Expires': '0'})})
    print(r.text)
If it still does not work, try without binary mode, but keep the rest.
If it stiiill does not work, print the exact error message. A 502 (Bad Gateway) might just mean that you're not targeting the right URL (you're not actually targeting example.com, right?).
csv="myfile.csv"
url='http://example.example.com/api/edc/upload/'+advertiserid+'/'+campaignid+'/'+'?encoding=utf-8&fieldsep=%3B&decimalsep=.&date=DD%2FMM%2FYYYY&info=1&process=1'
files = {'upload_file': open(csv,'rb')}
r = requests.post(url, files=files)
Imagine I have a REST API to import the CSV file (as a multipart-encoded file). The corresponding Python request should look like the one below.
import requests

hierarchy_file_name = '/Users/herle/ws/LookoutLab/data/monitor/Import_Hierarchy_Testcase_v2.csv'
headers = {
    'x-api-key': **REST_API_KEY**,
    'x-api-token': **REST_API_TOKEN**,
    'accept': 'application/json'
}
files = {'file': (hierarchy_file_name, open(hierarchy_file_name, 'rb'), 'text/csv')}
url = "https://abcd.com"
response = requests.post(url + '/api/v2/core/workspaces/import/validate',
                         files=files, verify=False, headers=headers)
print("Created")
print(response)
print(response.text)
Note:
Make sure that you don't add 'Content-Type': 'multipart/form-data' to the headers yourself; requests has to generate the multipart boundary, so it sets that header for you.
A site generates an iPXE build image file that I need to download by sending a request. I want to make a POST request to the third-party site (rom-o-matic.eu) and get a file back from it. Is this possible?
My example is this:
def requestPOST(request):
    values = {'wizardtype': 'standard',
              'outputformatstd': 'bin/ipxe.usb',
              'embed': '#!ipxe dhcp route}',
              'gitrevision': 'master'}
    r = requests.post("https://rom-o-matic.eu/", verify=False, data=values)
    return()
What should this return?
Thanks.
import requests
import shutil

def downloadPOST(outpath):
    values = {
        'wizardtype': 'standard',
        'outputformatstd': 'bin/ipxe.usb',
        'embed': '#!ipxe dhcp route}',
        'gitrevision': 'master',
    }
    r = requests.get("https://rom-o-matic.eu/", data=values, verify=False, stream=True)
    if r.status_code != 200:
        raise ValueError('Status code != 200')
    with open(outpath, 'wb') as f:
        # let requests decode gzip/deflate before copying the raw stream to disk
        r.raw.decode_content = True
        shutil.copyfileobj(r.raw, f)
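To answer the "what should this return" part: the function writes the image to outpath and returns None, so a call would simply be (the output filename is illustrative):

downloadPOST('ipxe.usb')  # saves the generated iPXE image next to the script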
Based on
How to download image using requests
I have very large POST data (over 100 MB) and one cookie. Now I want to send it to a server through Python; the POST data is in a file like this:
a=true&b=random&c=2222&d=pppp
This is my code, which only sends the cookies but not the POST content.
import requests
import sys

count = len(sys.argv)
if count < 4:  # need FILE, URL and LOGFILE arguments
    print 'usage a.py FILE URL LOGFILE'
else:
    url = sys.argv[2]
    data = {'file': open(sys.argv[1], 'rb')}
    cookies = {'session': 'testsession'}
    r = requests.get(url, data=data, cookies=cookies)
    f = open(sys.argv[3], 'w')
    f.write(r.text)
    f.close()
The code takes the file which holds the POST data, then the URL to send it to, then the output file where the response is to be stored.
Note: I am not trying to upload a file, but to send the POST content which is inside a file.
Firstly, you should be using requests.post. Secondly, if you want to post just the data inside the file, you need to read the contents of the file and parse it into a dict, since that is the format requests.post expects the data argument to come in.
Example: (Note: Just showing the relevant parts)
import urlparse
import requests
import sys

with open(sys.argv[1], 'rb') as f:
    data = urlparse.parse_qs('&'.join(f.read().splitlines()))

r = requests.post(url, data=data, cookies=cookies)
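For the sample contents shown in the question (a=true&b=random&c=2222&d=pppp), parse_qs produces a dict of lists, and requests re-encodes those values as form fields, so the server receives the same key=value pairs. A quick check, here using Python 3's urllib.parse (the Python 3 home of the urlparse module used above):

from urllib.parse import parse_qs

data = parse_qs('a=true&b=random&c=2222&d=pppp')
print(data)
# {'a': ['true'], 'b': ['random'], 'c': ['2222'], 'd': ['pppp']}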