Authentication issue when uploading video to YouTube using YouTube API and cron - python

I am trying to upload a video to YouTube each afternoon using the sample YouTube Python upload script from Google (see the Python code examples on developers.google, I haven't enough reputation to post more links...). I would like to run it as a cronjob. I have created the client_secrets.json file and tested the script manually, and it works fine. However, when I run the script as a cronjob I get the following error:
To make this sample run you will need to populate the
client_secrets.json file found at:
/usr/local/cron/scripts/client_secrets.json
with information from the Developers Console
https://console.developers.google.com/
For more information about the client_secrets.json file format, please
visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
I've included the information in the JSON file already and the -oauth2.json file is also present in /usr/local/cron/scripts.
Is the issue because the cronjob is running the script as root and somehow the credentials in one of those two files are no longer valid? Any ideas how I can enable the upload with cron?
Cheers
James

Ok, so 7 months later I've come back to this cron issue. It turns out that the upload2youtube.py example file was hardcoded to look in the current directory for the client_secrets.json file. This explains why I could run it manually from the local directory but not via cron. I've included the full path in the example file and this works fine now.
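For anyone hitting the same thing, this is roughly what the change looks like (a sketch; CLIENT_SECRETS_FILE is the variable name used in the Google sample, so adjust if your copy differs):
# Before: the sample resolved the file against the current working
# directory, which is not the script's directory when cron runs it.
# CLIENT_SECRETS_FILE = "client_secrets.json"
# After: point at the full path so it no longer depends on where cron
# starts the script.
CLIENT_SECRETS_FILE = "/usr/local/cron/scripts/client_secrets.json"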

Related

How can I get the token-sheets.pickle and token-drive.pickle files to use ezsheets for Google Sheets?

I am trying to set up ezsheets for use with Google Sheets. I followed the instructions from here https://ezsheets.readthedocs.io/en/latest/ and here https://automatetheboringstuff.com/2e/chapter14/
The setup process works quite differently on my computer: I was able to download the credentials-sheets.json file, but I still need to download the token-sheets.pickle and token-drive.pickle files. When I run import ezsheets, no browser window is opened as described in the setup instructions. Nothing happens.
Is there another way to download both files?
I followed the steps you referenced and managed to generate the files, but I also encountered the same issue before figuring out the cause. The problem is that there are a few possible causes, and the script fails silently without telling you exactly what happened.
Here are a few suggestions:
First off you need to configure your OAuth Consent Screen. You won't be able to create the credentials without it.
Make sure that you have the right credentials file. To generate it you have to go to the Credentials page in the Cloud Console. The docs say that you need an OAuth Client ID. Make sure that you have chosen the correct app at the top.
Then you will be prompted to choose an application type. According to the docs you shared the type should be "Other", but this is no longer available so "Desktop app" is the best equivalent if you're just running a local script.
After that you can just choose a name and create the credentials. You will be prompted to download the file afterwards.
Check that the credentials-sheets.json file has that exact name.
Make sure that the credentials-sheets.json file is located in the same directory where you're running your python script file or console commands.
Check that you've enabled both the Sheets and Drive API in your GCP Project.
Python will try to set up a temporary server on http://localhost:8080/ to retrieve the pickle files. If another application is using port 8080, it will also fail; in my case a previously failed Python script was hanging on to that port (a quick way to check the port is sketched below).
To find and close the processes using port 8080 you can refer to this answer for Linux/Mac or this other answer for Windows. Just make sure that the process is not something you're currently using.
I just used the single import ezsheets command to get the files, so after getting token-sheets.pickle I had to run it again to get token-drive.pickle; after that, the library should detect that you already have both files.
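If the port is your problem, here is a small sketch (plain Python, not part of ezsheets) to check whether port 8080 is free before running import ezsheets again:
import socket

# Try to bind to the port the temporary OAuth server uses (8080, as noted
# above). If the bind fails, something else is already listening there.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    try:
        s.bind(("localhost", 8080))
        print("Port 8080 is free; the browser flow should be able to start.")
    except OSError:
        print("Port 8080 is in use; close that process and try again.")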

GitHub Actions - Where are downloaded files saved?

I've seen plenty of questions and docs about how to download artifacts generated in a workflow to pass between jobs. However, I've only found one thread about persisting downloaded files between steps of the same job, and am hoping someone can help clarify how this should work, as the answer on that thread doesn't make sense to me.
I'm building a workflow that navigates a site using Selenium and exports data manually (sadly there is no API). When running this locally, I am able to navigate the site just fine and click a button that downloads a CSV. I can then re-import that CSV for further processing (ultimately, it's getting cleaned and sent to Redshift). However, when I run this in GitHub Actions, I am unclear where the file is downloaded to, and am therefore unable to re-import it. Some things I've tried:
Echoing the working directory when the workflow runs, and setting up my pandas.read_csv() call to import the file from that directory.
Downloading the file and then echoing os.listdir() to print the contents of the working directory. When I do this, the CSV file is not listed, which makes me believe it was not saved to the working directory as expected. (which would explain why #1 doesn't work)
FWIW, the website in question does not give me the option to choose where the file downloads. When run locally, I hit the button on the site, and it automatically exports a CSV to my Downloads folder. So I'm at the mercy of wherever GitHub decides to save the file.
Last, because I feel like someone will suggest this - it is not an option for me to use read_html() to scrape the file from the page's HTML.
Thanks in advance!
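A possible direction, assuming the site is being driven with Chrome through Selenium (the asker's exact setup isn't shown, so treat this as a sketch with an assumed file name): tell Chrome where to put downloads, point it at the Actions workspace, and then read the CSV back from that known location.
import os
import pandas as pd
from selenium import webdriver

# Save downloads into the job's workspace. GITHUB_WORKSPACE is set by
# GitHub Actions; fall back to the current directory when running locally.
download_dir = os.environ.get("GITHUB_WORKSPACE", os.getcwd())

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": download_dir,
    "download.prompt_for_download": False,
})

driver = webdriver.Chrome(options=options)
# ... navigate the site and click the export button as before ...

# Re-import the exported file ("export.csv" is an assumed name).
df = pd.read_csv(os.path.join(download_dir, "export.csv"))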

EZSheets Module won't open Window to log into Google account

I'm learning Python using Automate the Boring Stuff and I am currently on Chapter 14 which centers on using EZSheets to work with Google Sheets.
The book says to place the credentials-sheets.json file in the same folder as the Python scripts, so I placed it in the "\Python38\Scripts" folder.
The next step is to run import ezsheets and it will open a new browser window for me to log in to my Google account. Here is where I am stuck. Importing EZSheets does nothing. I don't get any errors, but it doesn't open the window for me to log into my Google account.
All the code I used was:
import ezsheets
I just spent about the last 2 hours tackling this problem. As someone with a thin layer of SQL and Python experience...I was very frustrated by the same problem. Here's what I did when I got stuck in a couple of places.
It might just be that you need to change the directory you are working in to "\Python38\Scripts" and then run the script, like you mentioned. If not, here's what I did next.
I followed the ezsheets docs, much like you, except I skipped over the Google Python Quickstart guide linked in the docs. No problem, I circled back and found it.
"Setting up the Same" was my next issue. First problem there: the documentation says to name your credentials "credentials-sheets.json", while the Google quickstart.py file calls for "credentials.json". You have to change that in the quickstart.py file (and quickstart.py should be in the same location as the credentials; in your case, "\Python38\Scripts").
Look for this
flow = InstalledAppFlow.from_client_secrets_file(
'credentials-sheets.json', SCOPES)
The above is what I fixed in the quickstart.py file (to match the ezsheets documentation) and it worked for me.
Then, I was getting a 400 error when running the quickstart. The browser window would open and give me this authorization error: "Error 400: redirect_uri_mismatch". It specified that I did not have the right redirect URL authorized. This stumped me for a while, but if you can get to your developer console and edit the credentials, you can add the localhost address and the port number given to you in the error.
Here comes the final issue with that solution. In the quickstart, port=0 was specified, so the port would be randomized every time I added the most recent port number to the authorization. After adding 3-4 ports and having it change on me, I saw in the quickstart.py file that I needed to specify the port like so.
creds = flow.run_local_server(port=8080)
After changing that in the quickstart.py file and adding the corresponding localhost redirect URI to my "Authorized redirect URIs" in the Developer Console, I was able to finish this step and move on.
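Putting both edits together, the relevant part of quickstart.py ends up looking roughly like this (a sketch based on the standard quickstart structure):
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']  # as in the quickstart

# Use the file name the ezsheets docs expect instead of credentials.json.
flow = InstalledAppFlow.from_client_secrets_file(
    'credentials-sheets.json', SCOPES)

# Pin the local OAuth server to a fixed port so the redirect URI you
# authorized in the Developer Console (http://localhost:8080/) stays valid.
creds = flow.run_local_server(port=8080)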
I'm open to criticism if this is incorrect. I found those two errors were missing from every answer I came across. Hopefully this saves some people some time while trying to set up ezsheets to sync with Google Drive and Google Sheets.
For the import command to work, you must first type python or python3 at your command prompt while inside the directory from which you want to import ezsheets. Once you have the Python prompt, the import ezsheets command will work.

Running an exe file on Azure

I am working on an Azure web app, and inside the web app I use Python code to run an exe file. The web app receives certain inputs (numbers) from the user and stores those inputs in a text file. Afterwards, an exe file runs, reads the inputs, and generates another text file called "results". The problem is that although the code works fine on my local computer, as soon as I put it on Azure, the exe file does not get triggered by the following line of code:
subprocess.call('process.exe', cwd=case_directory.path, shell=True)
I even tried running the exe file on Azure manually from Visual Studio Team Services (formerly Visual Studio Online) using the "run from Console" option. It just did not do anything. I'd appreciate it if anyone can help me.
Have you looked at using a WebJob to host/run your executable from? A WebJob can be virtually any kind of script or Windows executable. There are a number of ways to trigger your WebJob. You also get a lot of built-in monitoring and logging for free, via the Kudu interface.
#F.K I searched for some information which may be helpful for you; please see below.
According to the Python documentation for the subprocess module, using shell=True can be a security hazard. Please see the warning under Frequently Used Arguments for details.
However, the generally recommended approach for this scenario is to use Azure Queue Storage, Blob Storage, and Azure WebJobs: save the input file as a message in a storage queue, then have a continuous WebJob pick the messages up from the queue, process them, and save the result files into Blob Storage.
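On the subprocess call itself: whether or not it is the root cause on Azure, giving subprocess the full path to the executable and dropping shell=True removes two common failure points (a sketch; process.exe and case_directory.path come from the question):
import os
import subprocess

# Build an absolute path instead of relying on the shell to resolve
# 'process.exe' against the current working directory.
exe_path = os.path.join(case_directory.path, 'process.exe')

# Pass the command as a list and drop shell=True, which the subprocess
# docs flag as a security hazard and is unnecessary here.
return_code = subprocess.call([exe_path], cwd=case_directory.path)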

Can't auto-start Python program on Raspberry Pi due to Google Calendar

I have written a Python tkinter program which runs on my Raspberry Pi and does a number of things, including interfacing with my Google Calendar (read-only access). I can navigate to the directory it is in and run it there - it works fine.
I would like the program to start at boot-up, so I added it to the autostart file in /etc/xdg/lxsession/LXDE, as per advice from the web. However, it does not start at boot. So I tried running the line I put in that file manually, and I get this:
(code I run) python /home/blahblah/MyScript.py
WARNING: Please configure OAuth 2.0
To make this sample run you will need to download the client_secrets.json file and save it at:
/home/blahblah/client_secrets.json
The thing is, that file DOES exist. But for some reason the Google code doesn't realise this when I run the script from elsewhere.
How then can I get my script to run at bootup?
Figured this out now. It's tough not knowing whether it's a Python, Linux, or Google issue, but it was a Google one. I found that other people across the web have had issues with client_secrets.json as well, and the solution is to find where its location is stored in the Python code and, instead of just having the name of the file, include the full path as well, like this.
CLIENT_SECRETS = '/home/blahblahblah/client_secrets.json'
Then it all works fine - calling it from another folder and it starting on startup. :)
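If you'd rather not hardcode your home directory, a slightly more portable variant of the same idea is to resolve the file next to the script itself:
import os

# Build the path to client_secrets.json relative to this script, so it
# works no matter which directory autostart (or cron) launches it from.
CLIENT_SECRETS = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), 'client_secrets.json')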
