I am not a Python programmer or a YouTube API specialist. This is also my first Stack Overflow post. I am OK with Linux, programming in general, and bash scripting.
I am currently testing the youtube-upload program. I have developed a bash script that determines which videos have not already been uploaded by comparing MD5 checksums, and generates a logged bash script that then uploads the new videos with descriptions, tags, and titles using the youtube-upload Python app. The upload process is working well.
However, the uploaded videos do not take on the default YouTube channel settings, including the default playlist allocation and most advanced settings (some of which are available when uploading through a browser). This means I have to browse the uploaded videos one by one to change the playlist each video belongs to, and most settings in the video's "Advanced Settings" tab, which is time-consuming for hundreds of short videos.
I could not find any indication on Google that the V3 YouTube API exposes playlist allocation or many advanced settings.
I have the following questions:
Does anyone know whether playlist allocation and advanced settings are currently exposed by the YouTube API? If not, will they ever be?
If the API does expose this control, would anyone know how I can use bash (or Python, which seems relatively easy to learn) to change the playlist allocation and advanced settings for uploaded videos, or for new videos being uploaded by youtube-upload?
If the API does not expose this control, is there another method I am not aware of, where I can write code that controls the browser (Firefox) to change these settings for all uploaded videos?
It seems to me that if the YouTube API does not expose these settings, then no process performing YouTube uploads can be fully automated. My goal is to fully automate the entire process, including playlist and advanced settings selection.
The API to link a resource to a playlist is PlaylistItems at https://developers.google.com/youtube/v3/docs/playlistItems.
You can follow the list example to see/get the playlist IDs (if you have forgotten them). Then use the insert method to add your resource to a playlist (see the sketch after the property list below).
Extract from the insert reference:
You must specify a value for these properties:
snippet.playlistId
snippet.resourceId
You can set values for these properties:
snippet.playlistId
snippet.position
snippet.resourceId
contentDetails.note
contentDetails.startAt
contentDetails.endAt
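As a minimal sketch of that insert call in Python, assuming the google-api-python-client library and an already-completed OAuth flow (creds, PLAYLIST_ID, and VIDEO_ID below are placeholders for your own values):

    from googleapiclient.discovery import build

    # "creds" must come from your own OAuth 2.0 flow (e.g. google-auth-oauthlib);
    # adding a video to a playlist requires an authorized YouTube scope.
    youtube = build("youtube", "v3", credentials=creds)

    response = youtube.playlistItems().insert(
        part="snippet",
        body={
            "snippet": {
                "playlistId": "PLAYLIST_ID",      # required: the target playlist
                "resourceId": {                   # required: the video to add
                    "kind": "youtube#video",
                    "videoId": "VIDEO_ID",
                },
            }
        },
    ).execute()
    print(response["id"])  # ID of the newly created playlist item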
You can try this; I am not a code master either, but it works. Give ytb_up a shot. It is based on Selenium:
https://github.com/wanghaisheng/ytb-up
Features you may need:
1. Proxy support: auto-detects whether a proxy is needed.
2. Cookie support: for multiple channels under the same Google account.
3. Scheduled publishing: you can explicitly specify a date and time for each video, or set a publish policy with a daily publish count. For example, with a daily count of 4 and 5 videos, the first 4 will be published one day after the upload date and the remaining one two days after.
4. Works around Google account verification.
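For reference, browser automation with Selenium boils down to something like the sketch below. This is not ytb_up's actual API (which I have not verified), just a hedged illustration of the approach; the authentication and selector steps are placeholders you would have to fill in for YouTube Studio:

    from selenium import webdriver

    driver = webdriver.Firefox()  # drives a real Firefox instance
    driver.get("https://studio.youtube.com")
    # ... authenticate (e.g. by loading saved cookies), then locate the
    # video's "Advanced settings" controls and click/fill them as needed.
    driver.quit()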
I would like to automate this process of viewing logs in the dashboard and typing up the information (total messages sent in a time period, total errors, CPU usage, memory usage); this task is very time-consuming at the moment.
The info is gathered from the MuleSoft Anypoint Platform. I'm currently thinking of extracting all of the data using Python web scraping, but I don't know how to do that well.
You'll find here a screenshot of the website I'm trying to get the data off of; you can choose to see the logs for a specific time and date. My question is: should I start learning Python web scraping, or is there another way of doing things that I am just unaware of?
[Screenshot: logs website example]
It doesn't make any sense to use web scraping. All services in Anypoint Platform have a REST API. Most of them are documented at https://anypoint.mulesoft.com/exchange/portals/anypoint-platform/. Scraping may break with any minor change to the UI; the REST API is stable.
The screenshot seems to be from Anypoint Monitoring. I see the Anypoint Monitoring Archive API in the catalog. I'm not sure whether the API for getting Anypoint Monitoring dashboard data is documented. You could alternatively use the older CloudHub Dashboards API; it is probably not exactly the same, but it will be a good approximation.
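As a rough sketch of the API-first approach with Python's requests library, assuming basic username/password authentication (the login endpoint below is the one documented for that flow; verify it, and the exact monitoring endpoints you need, against the current Exchange docs):

    import requests

    # Exchange platform credentials for a bearer token (placeholder values).
    resp = requests.post(
        "https://anypoint.mulesoft.com/accounts/login",
        json={"username": "YOUR_USER", "password": "YOUR_PASSWORD"},
    )
    resp.raise_for_status()
    token = resp.json()["access_token"]

    # Use the token on subsequent calls to the documented REST APIs.
    headers = {"Authorization": "Bearer " + token}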
I would like to make a web application where a user can:
click a button on a webpage to create a random video** (with moviepy)
play that video back on the webpage
click the button again to create a new random video** that replaces the existing one
** The random video will be created by a Python/moviepy script that downloads a bunch of random video clips from the internet to a directory on my computer, then compiles them into one video (an .mp4 file).
I've done the Python script bit already; it successfully creates the video file.
To make the web app bit I have been recommended Django, and that's where I'm stuck!
So far I have installed Django and got to grips with the basics: I have a home page that says "Hello world".
My question is where do I go from here?
How do I connect my python/moviepy script with django?
What steps, apps, components etc, within django should I look into to make this thing?
I'm new to coding and looking for some guidance.
Thanks!
If you create a model with a FileField, your moviepy script can save each video into that field. The field stores the file in a specified directory under MEDIA_ROOT (you can organize uploads by date), and the model then stores the URL to it (you need to set MEDIA_URL in settings.py). You can define some sort of ID for the videos; if video privacy is not important, you can use the model IDs. These IDs can be captured from URLs through path converters.
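A minimal sketch of such a model (the model and field names are my own illustrative choices):

    from django.db import models

    class GeneratedVideo(models.Model):
        # Files land under MEDIA_ROOT/videos/<date>/ and are served via MEDIA_URL,
        # so video.file.url gives the address a template's <video> tag can use.
        file = models.FileField(upload_to="videos/%Y/%m/%d/")
        created = models.DateTimeField(auto_now_add=True)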
On the client side, JavaScript is needed. Simply running the script in the view is possible, but then the user has to wait for the response (and the browser may run into a timeout).
You should look at server-sent events. With Vue.js you can easily display a loading element while waiting for the event (a video being generated) and then fetch and switch to the video (see Django CRUD application tutorials). The Python script can run asynchronously (you call it from the view).
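One simple, hedged way to start the generator without blocking the request is a subprocess; generate_video.py below stands in for your existing moviepy script, and a proper task queue would be more robust in production:

    import subprocess

    from django.http import JsonResponse

    def start_generation(request):
        # Popen returns immediately, so the HTTP response is not held up
        # while moviepy downloads clips and renders the final .mp4.
        subprocess.Popen(["python", "generate_video.py"])
        return JsonResponse({"status": "started"})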
It is a lot, I know.
Actually, I am only learning these things myself, so sorry for any mistakes.
From what I have seen, Django is aimed at complex sites, so you could look at Flask instead. (I am learning Django and know nothing about Flask, so I'll go with Django here.)
Here is the needed setup:
define the urlpatterns for handling the URLs
create a model for storing your video
create a Django template for your page (HTML)
define a view for rendering the template and passing in the video (see the sketch after this list)
maybe some CSS to design it
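A bare-bones sketch of the urlpatterns and view pieces, reusing the hypothetical GeneratedVideo model from above (the names and template are illustrative assumptions):

    # urls.py
    from django.urls import path
    from . import views

    urlpatterns = [
        # <int:pk> is the path converter capturing the video ID from the URL
        path("video/<int:pk>/", views.video_detail, name="video-detail"),
    ]

    # views.py
    from django.shortcuts import get_object_or_404, render
    from .models import GeneratedVideo

    def video_detail(request, pk):
        video = get_object_or_404(GeneratedVideo, pk=pk)
        return render(request, "video_detail.html", {"video": video})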
You can run your video generator (in the view) at every reload and overwrite the existing video (in this case you don't even need a model), or you can save the generated videos and capture their IDs in the URLs (for example: https://yoursite.com/1); in this case the videos remain shareable.
If you go with the first option and sharing videos is not important to you, then you could write a simple HTML page with a video and a button. The button can trigger a JavaScript function that runs the video generator Python script and refreshes the page (the video is overwritten); you may need to wait for the script to finish, otherwise the old video can load.
A REST API is a more advanced way to refresh the video without reloading the page.
After these steps, you can deploy your page using an Apache server, for example.
I have tried to give you some guidelines (I am learning this on my own).
Hope that it helped :) There are tutorials for all of these.
I'm trying to automatically upload JPG photo files from a particular directory on my computer to a particular album on Google Photos. I'd like the photos to be pushed up to Google Photos periodically (every day or so is frequent enough). Google Photos Backup almost does what I want, but it just uploads the files; it doesn't put them into a particular [pre-existing] album on Google Photos. It's possible that I can somehow use Google Drive and a simple cron job for this, although I don't know how. I am also considering using the Picasa Web Albums API, but that feels like overkill and I'd like to avoid that work unless it's necessary. Are there any straightforward solutions to this?
Since, as you said, Google Photos Backup does the (upload) job, in my opinion the best way is to use a Google Apps Script stored inside your Google Drive (running periodically) to push each newly detected picture into the particular album.
For the relevant documentation, take a look at the Album class documentation and https://developers.google.com/apps-script/
If you need to use another language to do the job (Python, JS, etc.), please specify which one and also give us more detail (Mac / Windows / Linux).
Use IFTTT for this. The Google Photos channel fits this purpose perfectly. https://ifttt.com/applets/DMgPS2uZ-back-up-new-android-photos-you-take-to-google-photos
What is the best way to monitor website traffic for a Google App Engine hosted website?
It's fairly trivial to put some code in each page handler to record each page request to the datastore, and now (thanks stackoverflow) I have the code to log the referring site.
There's another question on logging traffic using the datastore, but it doesn't consider other options (if there are any).
My concern is that the datastore is expensive. Is there another way? Do people typically implement traffic monitoring, or am I being over-zealous?
If I do implement traffic monitoring via the datastore, what fields are recommended to capture? What's good and/or common practise?
I'd go with: timestamp; page; referrer; IP address; username (if logged in). Any other suggestions?
All of the items you mention are already logged by the built-in App Engine logger. Why do you need to duplicate that? You can download the logs at regular intervals for analysis if you need.
People usually use Google Analytics (or something similar), as it does client-side tracking and gives more insight than server-side tracking.
If you only need server-side tracking, then analysing the logs should be enough. The problem with the Log API is that it can be expensive, because it does not do real querying: for every log search it goes through all logs (within the range).
You might want to look at Mache, a tool that exports all GAE logs to Google BigQuery, which has proper query functionality.
Another option would be to download the logs and analyse them with local tools. GAE logs are in Apache format, so there are plenty of tools available.
You can use the logging module, which comes with a separate quota limit.
7 MBytes spanning 69 days (1% of the Retention limit)
I don't know what the limit is, but that's a line from my app's quota page, so it seems to be quite large.
You can then add entries to the log with
logging.debug("something to store")
if the default log does not already contain what you need. Then read the logs out locally with:
appcfg.py --num_days=0 request_logs appname/ output.txt
Anything you write out via System.err.println (or the Python equivalent) will automatically be appended to the App Engine log. So, for example, you can create your own logging format, put println calls on all your pages, and then download the log and grep for that format. For example, if this is your format:
MYLOG:url:userid:urlparams
then download the log and pipe it through grep ^MYLOG, and it will give you all the traffic for your site.
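In Python on App Engine the same idea might look like the sketch below (a webapp2-style handler; the MYLOG format mirrors the example above, and the handler name is an illustrative assumption):

    import logging

    import webapp2
    from google.appengine.api import users

    class AnyPage(webapp2.RequestHandler):
        def get(self):
            user = users.get_current_user()
            user_id = user.user_id() if user else "anon"
            # One greppable line per request: MYLOG:url:userid:urlparams
            logging.info("MYLOG:%s:%s:%s", self.request.path, user_id,
                         self.request.query_string)
            self.response.write("Hello")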
I'm thinking of setting up a Google App that simply displays an RSS or Atom feed. The idea is that every once in a while (via a cron job, or at the push of a magic button) the feed is read and copied into the app's internal data, ready to be viewed. This would be done in Python.
I found this page that seems to explain what I want to do, but it assumes I'm using some of the other Google products, as it relies on the Google API.
My idea was more along these lines: I add some new content, host it locally on my machine, go to the Google App administration panel, push a button, and my (locally hosted) feed is read and copied.
My questions now are:
Is the RSS (or Atom; one is enough) format specified well enough to handle add/edit/delete?
Are there any flavors or variants I should worry about?
Has this been done before? That would save me some work.
One option is to use the universal feed parser library, which will take care of most of these issues for you. Another option would be to use a PubSubHubbub-powered service such as Superfeedr, which will POST updates to you in a pre-sanitized form, eliminating most of your polling and parsing issues.
What about using an additional library, for instance feedparser?
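A minimal sketch of feedparser in use (the feed URL is a placeholder):

    import feedparser

    feed = feedparser.parse("https://example.com/feed.atom")
    for entry in feed.entries:
        # Each entry's "id" (Atom) or "guid" (RSS) is what you would key on
        # to decide whether a stored item was added, edited, or deleted.
        print(entry.get("id"), entry.get("title"), entry.get("updated"))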