Perhaps I'm misunderstanding how .torrent files work, but is there a way in Python to download the actual content a .torrent file references, the same content a torrent client such as uTorrent would download, but from the shell/command line using Python?
The following works for simply downloading the .torrent file itself, and sure, I could open a torrent client to grab the content as well, but I'd rather streamline the process on the command line. I can't seem to find much online about doing this...
import gzip
import requests
import torrentutils              # magnet-parsing helper already used in this snippet
from io import BytesIO

torrent = torrentutils.parse_magnet(magnet)
infohash = torrent['infoHash']

session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'})

url = "http://torcache.net/torrent/" + infohash + ".torrent"
answer = session.get(url)
torrent_data = answer.content

# torcache serves gzip-compressed .torrent files, so decompress before saving
buffer = BytesIO(torrent_data)
gz = gzip.GzipFile(fileobj=buffer)

with open(torrent['name'], "wb") as output:
    output.write(gz.read())
As far as I know I can't use libtorrent with Python 3 on a 64-bit Windows OS.
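(For what it's worth, on platforms where the libtorrent Python bindings do install, downloading the actual content of a magnet link looks roughly like the untested sketch below; it assumes the 1.x bindings and their add_magnet_uri API, which changes between versions.)
import time
import libtorrent as lt   # assumption: the 1.x python-libtorrent bindings

magnet = "magnet:?xt=urn:btih:ebab37b86830e1ed624c1fdbb2c59a1800135610&dn=StackOverflow201508.7z"

ses = lt.session()
ses.listen_on(6881, 6891)

# Add the magnet and download into the current directory
handle = lt.add_magnet_uri(ses, magnet, {'save_path': '.'})

while not handle.has_metadata():   # wait until the torrent's metadata has been fetched
    time.sleep(1)

print(handle.name())
while not handle.is_seed():        # keep polling until the download completes
    status = handle.status()
    print("%.2f%% complete" % (status.progress * 100))
    time.sleep(5)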
If magnet: links work in your web browser, then a simple way to start a new torrent download from your Python script is to open the URL via the webbrowser module:
import webbrowser
webbrowser.open(magnet_link)
Or from the command-line:
$ python -m webbrowser "magnet:?xt=urn:btih:ebab37b86830e1ed624c1fdbb2c59a1800135610&dn=StackOverflow201508.7z"
The download is performed by your actual torrent client such as uTorrent.
BitTornado works on Windows and has a command-line interface; take a look at btdownloadheadless.py. Note that it is written in Python 2.
http://www.bittornado.com/download.html
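Since btdownloadheadless.py is itself a Python script, you could drive it from your own code with subprocess. A rough sketch, assuming a Python 2 interpreter is on PATH and that the script still accepts the classic --responsefile/--saveas options (verify with its --help output):
import subprocess

# Rough sketch: run BitTornado's headless downloader as a child process.
# The option names are assumptions based on the classic client; check --help.
subprocess.call([
    "python2", "btdownloadheadless.py",
    "--responsefile", "StackOverflow201508.torrent",   # hypothetical local .torrent
    "--saveas", "StackOverflow201508.7z",
])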
Related
I am trying to download a file or folder from my GitLab repository, but the only way I have seen to do it is using cURL from the command line. Is there any way to download files from the repository with just the python-gitlab API? I have read through the API docs and have not found anything, but other posts said it was possible, just gave no solution.
You can do it like this:
import requests
response = requests.get('https://<your_path>/file.txt')
data = response.text
and then save the contents (data) to a file...
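For binary files you would write response.content (raw bytes) rather than response.text; a minimal sketch, keeping the placeholder URL from above:
import requests

response = requests.get('https://<your_path>/file.txt')   # placeholder raw-file URL
response.raise_for_status()                                # fail on 4xx/5xx instead of saving an error page

# response.content is bytes, so this works for any file type
with open('file.txt', 'wb') as out:
    out.write(response.content)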
Otherwise, use the python-gitlab API:
f = project.files.get(file_path='<folder>/file.txt', ref='<branch or commit>')
and then decode using:
import base64
content = base64.b64decode(f.content)
and then save content as file...
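Putting the API route together, a minimal sketch with python-gitlab; the server URL, token, project path, file path, and ref below are placeholders:
import gitlab

# Placeholders: point these at your GitLab instance, project and file
gl = gitlab.Gitlab('https://gitlab.example.com', private_token='<your_token>')
project = gl.projects.get('<group>/<project>')

f = project.files.get(file_path='<folder>/file.txt', ref='<branch or commit>')

# f.decode() returns the base64-decoded contents as bytes
# (equivalent to base64.b64decode(f.content) above)
with open('file.txt', 'wb') as out:
    out.write(f.decode())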
I have a list of URLs of files that open a download dialogue box with an option to save or open.
I'm using the Python requests module to download the files. When running in the Python IDLE shell I'm able to download a file with the code below.
link = fileurl
r = requests.get(link, allow_redirects=True)
with open("a.torrent", 'wb') as code:
    code.write(r.content)
But when I run the same code in a for loop, the downloaded files are corrupted or won't open.
for link in links:
    name = str(links.index(link)) + ".torrent"
    r = requests.get(link, allow_redirects=True)
    with open(name, 'wb') as code:
        code.write(r.content)
If you are trying to download a video from a website, try streaming it in chunks:
r = requests.get(video_link, stream=True)
with open("a.mp4", 'wb') as code:
    for chunk in r.iter_content(chunk_size=1024):
        code.write(chunk)
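The same streaming pattern also fits the original loop; a small helper sketch, assuming the URLs in links are direct file links:
import requests

def download(url, path, chunk_size=64 * 1024):
    # Stream the response to disk in chunks so large files are not held in memory
    with requests.get(url, stream=True, allow_redirects=True) as r:
        r.raise_for_status()          # fail loudly instead of saving an HTML error page
        with open(path, 'wb') as fh:
            for chunk in r.iter_content(chunk_size=chunk_size):
                fh.write(chunk)

for i, link in enumerate(links):
    download(link, str(i) + ".torrent")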
I'm trying to download a file stored on file.io, but all I get is a 2 KB file. How can I download it properly?
When I open the link in the browser I get the download dialog. Here is the code I'm using.
url = "https://www.file.io/"
r = requests.get(url, allow_redirects=True)
filename = "filedownloaded"
open(filename, 'wb').write(r.content)
file.io has an API that is very easy to use with cURL.
You need to know the target file's extension and its one-time URL. For example, if I upload a PNG to file.io, this would be my cURL request, and the file will be downloaded into the current directory.
curl -o test.png https://file.io/fileURL
Since you are writing a script for this, I am assuming you will have this information.
import os

# Note: each os.system() call runs in its own shell, so a "cd" in one call
# does not carry over to the next; change directory in the Python process instead
os.chdir("/target/directory/")

# You will have to retrieve the info for these variables
filename = "filename.extension"
url = "https://file.io/uniqueURL"

os.system("curl -o " + filename + " " + url)
There might be other ways, but this worked for me.
EDIT: cURL request using subprocess.
from subprocess import call

url = "https://file.io/uniqueURL"
filename = "filename.extension"

# "cd" run as a separate child process would not affect anything;
# pass the target directory to curl via the cwd argument instead
call(["curl", "-o", filename, url], cwd="/target/directory/")
I'm new to Python. I extracted some links from Twitter and am trying to use the Memento Aggregator to create a histogram. I've already used the Docker online playground to start the Memento Aggregator; how do I write a program that feeds the links from a txt file to it?
My current code is:
import requests

host = 'https://www.katacoda.com/courses/docker/playground'

# Open the file for reading ('r'); 'w' would truncate it and cannot be iterated for input
with open('Extracted links.txt', 'r') as f:
    for line in f:
        response = requests.get(host + "/timemap/json/" + line.strip())
I used tweepy to extract some links. Here are some samples.
https://www.mytownneo.com/sports/20200215/no-comeback-this-time-barberton-crushes-tallmadge-boys-basketball
https://twitter.com/i/web/status/1228709479589404672
https://www.sctimes.com/story/sports/2020/02/14/albany-beats-no-1-sauk-centre-final-seconds/4768857002/
http://www.fiba.basketball/fiba-once-again-top-in-international-sports-federations-social-media-ranking-report-for-2019
https://twitter.com/i/web/status/1228709487600640000
How do I write a Python program to feed these links into the Docker container and have MemGator process them?
I use this playground to run MemGator (https://www.katacoda.com/courses/docker/playground). I tried to have MemGator's TimeMap endpoint process these links, but I got stuck on actually passing the URLs to MemGator. MemGator's GitHub repository is at github.com/oduwsdl/memgator.
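As a starting point, a sketch that reads the extracted links and asks a running MemGator instance for the JSON TimeMap of each one; the base URL below is a placeholder for wherever your MemGator container is actually listening, and the /timemap/json/<uri> route is the one already used in the code above:
import requests

# Placeholder: base URL of your running MemGator container
memgator = "http://localhost:1208"

with open('Extracted links.txt', 'r') as f:
    for line in f:
        url = line.strip()                    # drop the trailing newline
        if not url:
            continue
        response = requests.get(memgator + "/timemap/json/" + url)
        print(url, response.status_code)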
You can use the wget package to download the file and then read it:
import wget
import pandas as pd

url = "your URL"
filename = wget.download(url)

# Read the downloaded csv/text file
pdf = pd.read_csv(filename)
print("Shape of dataset:", pdf.shape)
print(pdf.head(5))
I'm using a script to download images from a website, and until recently it was working fine. Of note, the page is HTTPS, but per the urllib docs that shouldn't be an issue. The script first requests the page and uses a regex to pull the download links from it. From there it goes into a loop to download each file, and the inner loop looks like this:
dllink = m[0].replace('\">Download','')
print dllink
#m = re.findall('[a-f0-9]+.[\w]+',dllink)
extension = re.findall('.[\w]+$',dllink)[0]
fname = post_id + extension
urllib.urlretrieve(dllink,cpath + "/" + fname)
printLine(post_id + " ")
delay = random.uniform(32.0,64.0)
dlcount = dlcount + 1
time.sleep(delay)
Again, it downloads a file, but the files I'm downloading are on the order of 200 KB to 4 MB, and every file has started coming back as 4 KB. I've copy-pasted the download links into browsers and they pull the right image, and wget downloads them just fine, so I'm not sure what's wrong with my code that it only grabs 4 KB of each file. If this is a server-side issue, is there a way to call wget from Python to accomplish the same thing without urlretrieve? Thanks in advance for any help!
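To answer the last part directly: yes, you can shell out to wget with subprocess; a sketch, reusing the dllink, cpath and fname variables from the loop above:
import subprocess

# Hypothetical drop-in replacement for the urllib.urlretrieve() line above
subprocess.call(["wget", "-O", cpath + "/" + fname, dllink])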