upload video file by path on python not working - python

I tried to upload a video file using Python, but the problem is that the system cannot find the file even though I pass the path of the file. My code looks like this:
import os
import requests
#step 1
host = 'https://blablabla.com'
test = {
    "upload_phase": "start",
    "file_size": 1063565
}
params = {
    "access_token": my_access_token,
    "fields": "video_id, start_offset, end_offset, upload_session_id",
}
vids = requests.post(host, params=params, data=test)
vids = vids.json()
try:
    video_id = vids["video_id"],
    start_offset = vids["start_offset"],
    end_offset = vids["end_offset"],
    upload_session_id = vids["upload_session_id"]
except:
    pass
print(vids)
###############################################################################
#step 2
###############################################################################
test = {
    "upload_phase": "transfer",
    "start_offset": start_offset,
    "upload_session_id": upload_session_id,
    "video_file_chunk": os.path.realpath('/home/def/Videos/test.mp4')
}
params = {
    "access_token": my_access_token,
    "fields": "start_offset, end_offset",
}
vids = requests.post(host, params=params, data=test)
vids = vids.json()
try:
    start_offset = vids["start_offset"],
    end_offset = vids["end_offset"]
except:
    pass
print(vids)
I tried many approaches, like os.path.abspath, os.path, os.path.dirname, os.path.basename, os.path.isfile, os.path.isabs, and os.path.isdir, but it still doesn't work, even with import os.path or import os.

In your code you only send the path to your file as a string to the server, not the file itself. You should try something like:
my_file = {'file_to_upload': open(os.path.realpath('/home/def/Videos/test.mp4'),'rb')}
# You should replace 'file_to_upload' with the name server actually expect to receive
# If you don't know what server expect to get, check browser's devconsole while uploading file manually
vids = requests.post(host, params=params, files=my_file)
Also note that you might need to use requests.Session() to be able to handle cookies, access token...
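Since the API in step 2 expects the actual bytes of the chunk in video_file_chunk, here is a minimal sketch of the transfer phase, assuming the resumable flow from step 1 (host, params, upload_session_id, start_offset and end_offset are the values obtained there; the field names and the loop condition are taken from the question and may differ for your API):
import requests

video_path = '/home/def/Videos/test.mp4'

# host, params, upload_session_id, start_offset and end_offset come from the "start" phase above
with requests.Session() as session, open(video_path, 'rb') as video:
    while int(start_offset) < int(end_offset):
        video.seek(int(start_offset))
        chunk = video.read(int(end_offset) - int(start_offset))
        data = {
            "upload_phase": "transfer",
            "start_offset": start_offset,
            "upload_session_id": upload_session_id,
        }
        # send the bytes of the chunk, not the path string
        files = {"video_file_chunk": chunk}
        resp = session.post(host, params=params, data=data, files=files).json()
        start_offset = resp["start_offset"]
        end_offset = resp["end_offset"]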

Related

Downloading multiple instances of one image from a website and saving them in a Windows folder

I would like to download multiple images from thispersondoesnotexist.com and save them in a Windows folder. The site uses an AI StyleGAN to generate thousands of face images, but the problem is that the site only generates one image per view, which is labelled as image.jpg, and this single image changes every time the page is reset.
I would like to write a Python script that retrieves and saves multiple instances of this randomly generated image, ending up with a number of different images.
I tried using the following script written by Nandhugp:
import urllib.request
import random

n = input("How many images do you need?")
val = int(n)
dir = input("Enter the directory you want to save")

for i in range(val):
    file_name = random.randrange(1, 10000)
    full_file_name = dir + str(file_name) + '.jpg'  #Insert your
    def downloader(image_url, full_file_name):
        urllib.request.urlretrieve(image_url, full_file_name)
    downloader("https://www.thispersondoesnotexist.com/", full_file_name)
However, the script doesn't work for me; I'm sorry but I'm new to Python 3 and am also having some difficulty understanding file paths in Windows. When the script asks for the directory I want to save the files in, do I enter "c:\faces\" or "c:/faces/"?
I'm using Win 7 64-bit OS on an older laptop.
# python3
import requests

def download(fileName):
    f = open(fileName, 'wb')
    f.write(requests.get('https://thispersondoesnotexist.com/image', headers={'User-Agent': 'My User Agent 1.0'}).content)
    f.close()

for i in range(2000):
    download(str(i) + '.jpg')
// node js
var request = require('request');
var fs = require('fs');

var sleep = function (duration) {
    return new Promise(resolve => {
        setTimeout(() => resolve(), duration)
    });
}

var downloadFile = function (url, fileName) {
    return new Promise(resolve => {
        var req = request(url);
        var file = fs.createWriteStream(fileName);
        req.pipe(file);
        req.on('end', () => {
            resolve();
        })
    })
}

var main = async function () {
    for (var i = 0; i < 100; i++) {
        console.log('Downloading ' + i)
        await downloadFile('https://thispersondoesnotexist.com/image', `${i}.jpg`)
        await sleep(1000);
    }
}

main();
If you were using Linux or Cygwin on Windows, you could just run this:
watch -n6 wget https://thispersondoesnotexist.com/image
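On the Windows path question in the original post: both "c:\faces\" and "c:/faces/" can work in Python, as long as backslashes are escaped or the string is a raw string. A small sketch combining that with the requests approach above (the folder name is just an example):
import os
import requests

save_dir = r"C:\faces"  # raw string; "C:/faces" works just as well
os.makedirs(save_dir, exist_ok=True)

for i in range(10):
    resp = requests.get('https://thispersondoesnotexist.com/image',
                        headers={'User-Agent': 'My User Agent 1.0'})
    # os.path.join inserts the right separator for the OS
    with open(os.path.join(save_dir, str(i) + '.jpg'), 'wb') as f:
        f.write(resp.content)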

How do I download images using the Scryfall API and Python

Pretty new to APIs and Python, for that matter. I need to grab images from Scryfall using their API. Here is the link to the API documentation: https://scryfall.com/docs/api
They are using json and the code looks like this. (https://api.scryfall.com/cards/cn2/78?format=json&pretty=true)
This is the part that contains the URIs to the images.
"image_uris": {
"small": "https://img.scryfall.com/cards/small/en/cn2/78.jpg?1517813031",
"normal": "https://img.scryfall.com/cards/normal/en/cn2/78.jpg?1517813031",
"large": "https://img.scryfall.com/cards/large/en/cn2/78.jpg?1517813031",
"png": "https://img.scryfall.com/cards/png/en/cn2/78.png?1517813031",
"art_crop": "https://img.scryfall.com/cards/art_crop/en/cn2/78.jpg?1517813031",
"border_crop": "https://img.scryfall.com/cards/border_crop/en/cn2/78.jpg?1517813031"
},
How would I grab the images at those URIs and download them?
I found this on github but I'm not really sure where to begin with it.
https://github.com/NandaScott/Scrython
I am using the "Default Cards" file on this page
https://scryfall.com/docs/api/bulk-data
You need to download the image data and save it locally. Step 1, getting the image data using Python:
import requests as req

img_uris = {
    "small": "https://img.scryfall.com/cards/small/en/cn2/78.jpg?1517813031",
    "normal": "https://img.scryfall.com/cards/normal/en/cn2/78.jpg?1517813031",
    "large": "https://img.scryfall.com/cards/large/en/cn2/78.jpg?1517813031",
    "png": "https://img.scryfall.com/cards/png/en/cn2/78.png?1517813031",
    "art_crop": "https://img.scryfall.com/cards/art_crop/en/cn2/78.jpg?1517813031",
    "border_crop": "https://img.scryfall.com/cards/border_crop/en/cn2/78.jpg?1517813031"
}

img_request = req.get(img_uris['normal'])
# Always test your response obj before performing any work!
img_request.raise_for_status()
The function raise_for_status() will raise whatever exception requests hit while making the request. If nothing is raised, we received a successful response code, meaning our request was good. Now step 2, saving the data:
import os

img_file = "queen_marchesa_norm.jpg"
# open in binary mode ('wb') because img_request.content is bytes
with open(os.path.join(os.getcwd(), img_file), 'wb') as f:
    f.write(img_request.content)
Here we declare a filename, use that filename to make a writeable file object, and then write all the data from our img_request to our file object. If you want to learn more about requests check the documentation.
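Since the question also mentions working from the "Default Cards" bulk data file, here is a hedged sketch of looping over that JSON and saving each card's normal image. The local file name default-cards.json, the output folder, and the skip rule for cards without image_uris (e.g. multi-faced cards) are assumptions, not part of the answer above:
import json
import os
import requests

# assumed local copy of the "Default Cards" bulk data download
with open('default-cards.json', encoding='utf-8') as f:
    cards = json.load(f)

os.makedirs('card_images', exist_ok=True)

for card in cards:
    uris = card.get('image_uris')
    if not uris:  # some layouts (e.g. double-faced cards) keep their images elsewhere
        continue
    resp = requests.get(uris['normal'])
    resp.raise_for_status()
    # build a file-system-friendly name from the card name
    fname = card['name'].replace('//', '-').replace(' ', '_') + '.jpg'
    with open(os.path.join('card_images', fname), 'wb') as out:
        out.write(resp.content)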

How can I save prestashop invoices automatically and manually?

I want to print out an invoice (PDF) automatically as soon as it is saved on the server, and also make manual saving possible.
I'm using PrestaShop 1.6.1. The invoices are normally downloaded from the PrestaShop admin page, but I needed an easier way to print them, so I made an admin page for myself.
The printer button's href is the generated invoice address,
like: "http://www.example.com/admin/index.php?controller=AdminPdf&submitAction=generateInvoicePDF&id_order=3230"
From the link I can download the invoice and then print it once it's opened in a PDF reader, but I want to do this in one click.
So I made a script for automatically printing the PDF when it's saved to a specific location:
#!/usr/bin/python
import os
import time
import os.path
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class ExampleHandler(FileSystemEventHandler):
    def on_created(self, event):
        output = str(event.src_path.replace("./", ""))
        print(output)
        # print event.src_path.replace("./","")
        print("Got event for file %s" % event.src_path)
        os.system("lp -d HL2250DN %s" % output)

observer = Observer()
event_handler = ExampleHandler()
observer.schedule(event_handler, path='.', recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
There are two options for getting it saved automatically on the server:
1. Override the PDF.php and PDFGenerator.php files like this:
PDF.php
<?php
class PDF extends PDFCore
{
    public function render($display = true)
    {
        if ($this->template == PDF::TEMPLATE_INVOICE)
            parent::render('F', true);
        return parent::render($display);
    }
}
?>
PDFGenerator.php
<?php
class PDFGenerator extends PDFGeneratorCore
{
    public function render($filename, $display = true)
    {
        if (empty($filename)) {
            throw new PrestaShopException('Missing filename.');
        }
        $this->lastPage();
        if ($display === true) {
            $output = 'D';
        } elseif ($display === false) {
            $output = 'S';
        } elseif ($display == 'D') {
            $output = 'D';
        } elseif ($display == 'S') {
            $output = 'S';
        } elseif ($display == 'F') {
            $output = 'F';
            $filename = '/folder/for/print_it/'.str_replace("#", "", $filename);
        } else {
            $output = 'I';
        }
        return $this->output($filename, $output);
    }
}
?>
2. Use a script to download
First attempt
The first option worked for automatic saving, but when I tried to save invoices manually I got a blank or broken PDF file. I also tried to change pdf.php, but it didn't work out for me. I also made a post about this: Prestashop saving invoices manually and automatically. No answers were given, so I moved on to the second option.
Second attempt
I tried to download invoices with a Python script and it worked, but how can I know which one to download?
#!/usr/bin/env python
import requests
import webbrowser

url = "http://www.example.com/admin/index.php?controller=AdminLogin&token=5a01dc4e606bca6c26e95ddea92d3d15"
url2 = "http://www.example.com/admin/index.php?controller=AdminPdf&token=35b276c05aa6f5eb516737a8d534eb66&submitAction=generateInvoicePDF&id_order=3221"

payload = {'example': 'example',
           'example': 'example',
           'stay_logged_in': '2',
           'submitLogin': '1'}

with requests.session() as s:
    # fetch the login page
    s.get(url)
    # post to the login form
    r = s.post(url, data=payload)
    print(r.text)
    response = s.get(url2)
    with open('/tmp/metadataa.pdf', 'wb') as f:
        f.write(response.content)
So the problem with this option is: how can I pass the href (whatever was clicked from the printer button) to the URL?
Solving this has been really frustrating. I know there is a simple and easy option for this, but I'm still looking for it.
Every time you generate an invoice PDF you are forcing it to be saved as a local file.
What you want to do is add an extra GET parameter to the print button and check for its presence in the overridden class, so that the PDF only gets stored as a local file when you want to print directly.
So first add a GET parameter to the print buttons, e.g. &print=1, either in your template or wherever you generate these buttons, so that the button's href looks like this:
http://www.example.com/admin/index.php?controller=AdminPdf&submitAction=generateInvoicePDF&id_order=3230&print=1
Now you can check whether the parameter exists in the PDF class and only then force the PDF to be output to a local file.
class PDF extends PDFCore
{
    public function render($display = true)
    {
        if ($this->template == PDF::TEMPLATE_INVOICE && Tools::getValue('print') == 1) {
            // Output PDF to local file
            parent::render('F');
            // Redirect back to the same page so you don't get blank page
            Tools::redirectAdmin(Context::getContext()->link->getAdminLink('AdminMyController'));
        }
        else {
            return parent::render($display);
        }
    }
}
You can keep the overridden PDFGenerator class as is.
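As an aside on the Python "second attempt": if you ever do need the script route, the order id can be passed into the URL instead of hard-coding url2. A minimal sketch, assuming the same AdminPdf controller and token as in the question's script, and a session that is already logged in:
import requests

ADMIN_URL = "http://www.example.com/admin/index.php"

def fetch_invoice(session, id_order, token="35b276c05aa6f5eb516737a8d534eb66"):
    # let requests build the query string rather than hard-coding it
    params = {
        "controller": "AdminPdf",
        "token": token,
        "submitAction": "generateInvoicePDF",
        "id_order": id_order,
    }
    response = session.get(ADMIN_URL, params=params)
    with open('/tmp/invoice_%s.pdf' % id_order, 'wb') as f:
        f.write(response.content)

# usage, after logging in with the same requests.session() as in the script above:
# fetch_invoice(s, 3230)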

Disable Plotly in Python from communicating with the network in any form

Is it possible to get Plotly (used from within Python) to be "strictly local"? In other words, is it possible to use it in a way that guarantees it won't contact the network for any reason?
This includes things like the program trying to contact the Plotly service (since that was the business model), and also things like ensuring clicking anywhere in the generated html won't either have a link to Plotly or anywhere else.
Of course, I'd like to be able to do this on a production machine connected to the network, so pulling out the network connection is not an option.
Even a simple import plotly already attempts to connect to the network as this example shows:
import logging
logging.basicConfig(level=logging.INFO)
import plotly
The output is:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): api.plot.ly
The connection is made when the get_graph_reference() function is called while the graph_reference module is initialized.
One way to avoid connecting to plot.ly servers is to set an invalid plotly_api_domain in ~/.plotly/.config. For me, this is not an option as the software is run on the client’s machine and I do not want to modify their configuration file. Additionally, it is also not yet possible to change the configuration directory through an environment variable.
One work-around is to monkey-patch requests.get before importing plotly:
import requests
import inspect

original_get = requests.get

def plotly_no_get(*args, **kwargs):
    one_frame_up = inspect.stack()[1]
    if one_frame_up[3] == 'get_graph_reference':
        raise requests.exceptions.RequestException
    return original_get(*args, **kwargs)

requests.get = plotly_no_get

import plotly
This is surely not a full solution but, if nothing else, this shows that plot.ly is currently not meant to be run completely offline.
I think I have come up with a solution for this. First, you need to download the open-source Plotly.js file. Then I have a function, written below, that will produce the JavaScript from the Python plot and reference your local copy of plotly-latest.min.js. See below:
import sys
import os
import uuid
import json
from pkg_resources import resource_string
from plotly import session, tools, utils

def get_plotlyjs():
    path = os.path.join('offline', 'plotly.min.js')
    plotlyjs = resource_string('plotly', path).decode('utf-8')
    return plotlyjs

def js_convert(figure_or_data, outfilename, show_link=False, link_text='Export to plot.ly',
               validate=True):
    figure = tools.return_figure_from_figure_or_data(figure_or_data, validate)
    width = figure.get('layout', {}).get('width', '100%')
    height = figure.get('layout', {}).get('height', 525)
    try:
        float(width)
    except (ValueError, TypeError):
        pass
    else:
        width = str(width) + 'px'
    try:
        float(height)
    except (ValueError, TypeError):
        pass
    else:
        height = str(height) + 'px'
    plotdivid = uuid.uuid4()
    jdata = json.dumps(figure.get('data', []), cls=utils.PlotlyJSONEncoder)
    jlayout = json.dumps(figure.get('layout', {}), cls=utils.PlotlyJSONEncoder)
    config = {}
    config['showLink'] = show_link
    config['linkText'] = link_text
    config["displaylogo"] = False
    config["modeBarButtonsToRemove"] = ['sendDataToCloud']
    jconfig = json.dumps(config)
    plotly_platform_url = session.get_session_config().get('plotly_domain',
                                                           'https://plot.ly')
    if (plotly_platform_url != 'https://plot.ly' and
            link_text == 'Export to plot.ly'):
        link_domain = plotly_platform_url\
            .replace('https://', '')\
            .replace('http://', '')
        link_text = link_text.replace('plot.ly', link_domain)
    script = '\n'.join([
        'Plotly.plot("{id}", {data}, {layout}, {config}).then(function() {{',
        '    $(".{id}.loading").remove();',
        '}})'
    ]).format(id=plotdivid,
              data=jdata,
              layout=jlayout,
              config=jconfig)
    html = """<div class="{id} loading" style="color: rgb(50,50,50);">
Drawing...</div>
<div id="{id}" style="height: {height}; width: {width};"
class="plotly-graph-div">
</div>
<script type="text/javascript">
{script}
</script>
""".format(id=plotdivid, script=script,
           height=height, width=width)
    # html = html.replace('\n', '')
    with open(outfilename, 'w') as out:
        # out.write(r'<script src="https://cdn.plot.ly/plotly-latest.min.js"></script>')
        out.write(r'<script src="plotly-latest.min.js"></script>')
        for line in html.split('\n'):
            out.write(line)
    print('JS Conversion Complete')
The key lines that take away all the links are:
config['showLink'] = show_link #False
....
config["modeBarButtonsToRemove"]= ['sendDataToCloud']
You would call the function like this to get a static HTML file that references your local copy of the open-source Plotly library:
fig = {
    "data": [{
        "x": [1, 2, 3],
        "y": [4, 2, 5]
    }],
    "layout": {
        "title": "hello world"
    }
}
js_convert(fig, 'test.html')
I haven't done any extensive testing, but it looks like Plot.ly offers an "offline" mode:
https://plot.ly/python/offline/
A simple example:
from plotly.offline import plot
from plotly.graph_objs import Scatter
plot([Scatter(x=[1, 2, 3], y=[3, 1, 6])])
You can install Plot.ly via pip and then run the above script to produce a static HTML file:
$ pip install plotly
$ python ./above_script.py
When I run this from Terminal my web browser opens to the following file URL:
file:///some/path/to/temp-plot.html
This renders an interactive graph that is completely local to your file system.
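To tie this back to the "no links anywhere" requirement, the offline plot() call also accepts options to drop the "Export to plot.ly" link and keep everything local. A minimal sketch assuming the legacy plotly.offline API of that era (parameter availability may vary by version):
from plotly.offline import plot
from plotly.graph_objs import Scatter

plot(
    [Scatter(x=[1, 2, 3], y=[3, 1, 6])],
    filename='local-plot.html',
    show_link=False,        # drop the "Export to plot.ly" link
    auto_open=False,        # don't launch a browser
    include_plotlyjs=True,  # embed plotly.js in the HTML instead of loading it remotely
)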

Python 3.5 / Pastebin "Bad API request, invalid api_option"

I'm working on a twitch irc bot and one of the components I wanted to have available was the ability for the bot to save quotes to a pastebin paste on close, and then retrieve the same quotes on start up.
I've started with the saving part, and have hit a road block where I can't seem to get a valid post, and I can't figure out a method.
#!/usr/bin/env python3
import urllib.parse
import urllib.request

# --------------------------------------------- Pastebin Requisites --------------------------------------------------
pastebin_key = 'my pastebin key'  # developer api key, required. GET: http://pastebin.com/api
pastebin_password = 'password'    # password for pastebin_username
pastebin_postexp = 'N'            # N = never expire
pastebin_private = 0              # 0 = Public 1 = unlisted 2 = Private
pastebin_url = 'http://pastebin.com/api/api_post.php'
pastebin_username = 'username'    # user corresponding with key
# --------------------------------------------- Value clean up --------------------------------------------------
pastebin_password = urllib.parse.quote(pastebin_password, safe='/')
pastebin_username = urllib.parse.quote(pastebin_username, safe='/')
# --------------------------------------------- Pastebin Functions --------------------------------------------------
def post(title, content):  # used for posting a new paste
    pastebin_vars = {'api_option': 'paste', 'api_user_key': pastebin_username, 'api_paste_private': pastebin_private,
                     'api_paste_name': title, 'api_paste_expire_date': pastebin_postexp, 'api_dev_key': pastebin_key,
                     'api_user_password': pastebin_password, 'api_paste_code': content}
    try:
        str_to_paste = ', '.join("{!s}={!r}".format(key, val) for (key, val) in pastebin_vars.items())  # dict to str :D
        str_to_paste = str_to_paste.replace(":", "")    # remove :
        str_to_paste = str_to_paste.replace("'", "")    # remove '
        str_to_paste = str_to_paste.replace(")", "")    # remove )
        str_to_paste = str_to_paste.replace(", ", "&")  # replace dividers with &
        urllib.request.urlopen(pastebin_url, urllib.parse.urlencode(pastebin_vars)).read()
        print('did that work?')
    except:
        print("post submit failed :(")
    print(pastebin_url + "?" + str_to_paste)  # print the output for test

post("test", "stuff")
I'm open to importing more libraries and stuff, not really sure what I'm doing wrong after working on this for two days straight :S
import urllib.parse
import urllib.request

PASTEBIN_KEY = 'xxx'
PASTEBIN_URL = 'https://pastebin.com/api/api_post.php'
PASTEBIN_LOGIN_URL = 'https://pastebin.com/api/api_login.php'
PASTEBIN_LOGIN = 'my_login_name'
PASTEBIN_PWD = 'yyy'

def pastebin_post(title, content):
    login_params = dict(
        api_dev_key=PASTEBIN_KEY,
        api_user_name=PASTEBIN_LOGIN,
        api_user_password=PASTEBIN_PWD
    )
    data = urllib.parse.urlencode(login_params).encode("utf-8")
    req = urllib.request.Request(PASTEBIN_LOGIN_URL, data)
    with urllib.request.urlopen(req) as response:
        pastebin_vars = dict(
            api_option='paste',
            api_dev_key=PASTEBIN_KEY,
            api_user_key=response.read(),
            api_paste_name=title,
            api_paste_code=content,
            api_paste_private=2,
        )
        return urllib.request.urlopen(PASTEBIN_URL, urllib.parse.urlencode(pastebin_vars).encode('utf8')).read()

rv = pastebin_post("This is my title", "These are the contents I'm posting")
print(rv)
Combining two different answers above gave me this working solution.
First, your try/except block is throwing away the actual error. You should almost never use a "bare" except clause without capturing or re-raising the original exception. See this article for a full explanation.
Once you remove the try/except, you will see the underlying error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "paste.py", line 42, in post
    urllib.request.urlopen(pastebin_url, urllib.parse.urlencode(pastebin_vars)).read()
  File "/usr/lib/python3.4/urllib/request.py", line 161, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.4/urllib/request.py", line 461, in open
    req = meth(req)
  File "/usr/lib/python3.4/urllib/request.py", line 1112, in do_request_
    raise TypeError(msg)
TypeError: POST data should be bytes or an iterable of bytes. It cannot be of type str.
This means you're trying to pass a unicode string into a function that's expecting bytes. When you do I/O (like reading/writing files on disk, or sending/receiving data over HTTP) you typically need to encode any unicode strings as bytes. See this presentation for a good explanation of unicode vs. bytes and when you need to encode and decode.
Next, this line:
urllib.request.urlopen(pastebin_url, urllib.parse.urlencode(pastebin_vars)).read()
Is throwing away the response, so you have no way of knowing the result of your API call. Assign this to a variable or return it from your function so you can then inspect the value. It will either be a URL to the paste, or an error message from the API.
Next, I think your code is sending a lot of unnecessary parameters to the API and your str_to_paste statements aren't necessary.
I was able to make a paste using the following, much simpler, code:
import urllib.parse
import urllib.request

PASTEBIN_KEY = 'my-api-key'  # developer api key, required. GET: http://pastebin.com/api
PASTEBIN_URL = 'http://pastebin.com/api/api_post.php'

def post(title, content):  # used for posting a new paste
    pastebin_vars = dict(
        api_option='paste',
        api_dev_key=PASTEBIN_KEY,
        api_paste_name=title,
        api_paste_code=content,
    )
    return urllib.request.urlopen(PASTEBIN_URL, urllib.parse.urlencode(pastebin_vars).encode('utf8')).read()
Here it is in use:
>>> post("test", "hello\nworld.")
b'http://pastebin.com/v8jCkHDB'
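Since the bot also needs to read the quotes back on startup, here is a minimal sketch of fetching a paste's contents again, assuming the paste key from the URL returned above and Pastebin's public raw endpoint (https://pastebin.com/raw/<paste_key>):
import urllib.request

def get_paste(paste_key):
    # the raw endpoint returns just the paste body as plain text
    url = 'https://pastebin.com/raw/' + paste_key
    with urllib.request.urlopen(url) as response:
        return response.read().decode('utf-8')

print(get_paste('v8jCkHDB'))  # key taken from the URL returned by post()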
I didn't know about pastebin until now. I read their api and tried it for the first time, and it worked perfectly fine.
Here's what I did:
I logged in to fetch the api_user_key.
Included that in the posting along with api_dev_key.
Checked the website, and the post was there.
Here's the code:
import urllib.parse
import urllib.request

def post(url, params):
    data = urllib.parse.urlencode(params).encode("utf-8")
    req = urllib.request.Request(url, data)
    with urllib.request.urlopen(req) as response:
        return response.read()

# Logging in to fetch api_user_key
login_url = "http://pastebin.com/api/api_login.php"
login_params = {"api_dev_key": "<the dev key they gave you>",
                "api_user_name": "<username goes here>",
                "api_user_password": "<password goes here>"}
api_user_key = post(login_url, login_params)

# Posting some random text
post_url = "http://pastebin.com/api/api_post.php"
post_params = {"api_dev_key": "<the dev key they gave you>",
               "api_option": "paste",
               "api_paste_code": "<head>Testing</head>",
               "api_paste_private": "0",
               "api_paste_name": "testing.html",
               "api_paste_expire_date": "10M",
               "api_paste_format": "html5",
               "api_user_key": api_user_key}
response = post(post_url, post_params)
Only the first three parameters are needed for posting something, the rest are optional.
FWIW, the API doesn't seem to accept plain HTTP requests as of writing this, so make sure the URLs are in the format https://pas...
