Automatically upload photos to a particular Google Photos album - python

I'm trying to automatically upload JPG photo files from a particular directory on my computer to a particular album on Google Photos. I'd like the photos to periodically get pushed up to Google Photos (every day or so is frequent enough). Google Photos Backup almost does what I want, but it just uploads the files -- it doesn't put them into a particular [pre-existing] album on Google Photos. It's possible that I can somehow use Google Drive and a simple cron job for this, although I don't know how. I am also considering using the Picasa Web Albums API, but that feels like overkill and I'd like to avoid that work unless it's necessary. Are there any straightforward solutions to this?

Since you said that Google Photos Backup does the upload job, in my opinion the best approach is a Google Apps Script stored in your Google Drive (running periodically) that pushes each newly detected picture into a particular album.
For documentation, you may take a look at the album class documentation and also https://developers.google.com/apps-script/
If you need to use another language to do the job (Python, JS, etc.), please specify which one, and also tell us your platform (Mac / Windows / Linux).
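If you do end up scripting it yourself in Python, here is a minimal sketch using the REST endpoints of the newer Google Photos Library API (which has since replaced the Picasa Web Albums API). It assumes you already have an OAuth 2.0 access token with the photoslibrary scope and the ID of the target album; the token, album ID, and directory below are all placeholders.

    import os
    import requests  # pip install requests

    ACCESS_TOKEN = "ya29...."      # placeholder: OAuth 2.0 token with photoslibrary scope
    ALBUM_ID = "YOUR_ALBUM_ID"     # placeholder: target album's ID
    WATCH_DIR = "/path/to/photos"  # placeholder: directory to sync

    def upload_to_album(path):
        # Step 1: upload the raw bytes; the API returns an upload token.
        with open(path, "rb") as f:
            upload_token = requests.post(
                "https://photoslibrary.googleapis.com/v1/uploads",
                headers={
                    "Authorization": "Bearer " + ACCESS_TOKEN,
                    "Content-type": "application/octet-stream",
                    "X-Goog-Upload-File-Name": os.path.basename(path),
                    "X-Goog-Upload-Protocol": "raw",
                },
                data=f.read(),
            ).text

        # Step 2: attach the uploaded bytes to the album as a media item.
        requests.post(
            "https://photoslibrary.googleapis.com/v1/mediaItems:batchCreate",
            headers={"Authorization": "Bearer " + ACCESS_TOKEN},
            json={
                "albumId": ALBUM_ID,
                "newMediaItems": [{"simpleMediaItem": {"uploadToken": upload_token}}],
            },
        )

    for name in os.listdir(WATCH_DIR):
        if name.lower().endswith(".jpg"):
            upload_to_album(os.path.join(WATCH_DIR, name))

Run it from cron once a day for the periodic push. One caveat worth knowing: batchCreate can only add items to albums that were created by your own app, so the "pre-existing" album may need to be created through the API once.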

Use IFTTT for this. The Google Photos channel fits this purpose perfectly. https://ifttt.com/applets/DMgPS2uZ-back-up-new-android-photos-you-take-to-google-photos

Related

How do I work with large data with a web application?

I am wrapping up a personal project that involved using Flask, Python, and PythonAnywhere. I learned a lot, and now I have some new ideas for personal projects.
My next project involves manipulating video files and converting them into other file types, for example JPGs. When I drafted how my system could work, I quickly realized that the platform I am currently using for web application hosting, PythonAnywhere, will be too expensive and perhaps even too slow, since I will be working with large files.
I searched around and found AWS S3 for file storage, but I am having trouble finding out how I can operate on that data to do my conversions in Python. I definitely don't want to download from S3, operate on the data in PythonAnywhere, and then re-upload the converted files to a bucket. The project will be available for use on the internet, so I am trying to make it as robust and scalable as possible.
I found it hard to even word this question, as I am not sure I am asking the right questions. I guess I am looking for a way to manipulate large data files, preferably in Python, without having to work with the data locally, if that makes any sense.
I am open to learning new technologies if that is the case and am looking for some direction on how I might achieve this personal project.
Have you looked into AWS Elastic Transcoder?
Amazon Elastic Transcoder lets you convert media files that you have stored in Amazon Simple Storage Service (Amazon S3) into media files in the formats required by consumer playback devices. For example, you can convert large, high-quality digital media files into formats that users can play back on mobile devices, tablets, web browsers, and connected televisions.
Like all things AWS, there are SDKs (e.g. the Python SDK) that allow you to access the service programmatically.
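As a rough illustration, here is a minimal boto3 sketch that submits a transcoding job for a file already sitting in S3. The pipeline ID is a placeholder you would get after creating a pipeline in the Elastic Transcoder console, and the preset shown is meant to be the generic 720p system preset; treat both values as assumptions to verify.

    import boto3  # pip install boto3; assumes AWS credentials are configured

    transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

    response = transcoder.create_job(
        PipelineId="1111111111111-abcde1",          # placeholder pipeline ID
        Input={"Key": "uploads/source-video.mp4"},  # object in the pipeline's input bucket
        Outputs=[{
            "Key": "converted/output-720p.mp4",     # object in the output bucket
            "PresetId": "1351620000001-000010",     # generic 720p system preset
        }],
    )
    print("Job started:", response["Job"]["Id"])

Because the job runs inside AWS, the large files never leave S3, which addresses the "don't download, convert locally, re-upload" concern.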

Setting Up S3 with Heroku and Django with Images

So I currently have my static files (JS and CSS) just being stored on Heroku, which is no biggie. However, I have objects that I need to store multiple images for, and I need to be able to get those images on request. How would I store a reference to those images?
I was planning to use S3 Direct File Upload, using these steps on Heroku here. Is this also going to be the best way for me to do so?
Thank you in advance.
I don't think setting up static (CSS, JS, etc.) or media (images, videos) files to be stored on S3 has anything to do with Heroku or where you deploy. Rather, it's just making sure Django knows where to save the files and where to fetch them. I would definitely not follow that link, because it seems confusing and not helpful when working with Django.
This tutorial has really helped me, as it shows you how to set all of that up; I have gone through these steps and can confirm it does the trick (a condensed version of the settings it walks you through is sketched below): https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
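For reference, the heart of that tutorial is pointing Django's storage backend at S3 through the django-storages package. A condensed sketch of the settings (bucket name and keys are placeholders):

    # settings.py -- requires: pip install django-storages boto3
    INSTALLED_APPS = [
        # ... your other apps ...
        "storages",
    ]

    AWS_ACCESS_KEY_ID = "YOUR_KEY_ID"          # placeholder
    AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_KEY"  # placeholder
    AWS_STORAGE_BUCKET_NAME = "your-bucket"    # placeholder

    # Store uploaded media (e.g. your per-object images) on S3.
    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    MEDIA_URL = "https://%s.s3.amazonaws.com/" % AWS_STORAGE_BUCKET_NAME

With that in place, an ImageField on your model saves its file to the bucket automatically, and the field's .url attribute is the reference you hand back on request.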
While I've gone this route in the past, I've recently opted to use DigitalOcean's one-click app, Dokku (it's based on Herokuish). I then use Dokku's persistent storage to take advantage of the 25 GB of storage on DO's smallest $5/month plan. I wrote a guide to this here.

youtube-upload and google youtube api - deeper control of upload process

I am not a Python programmer, or a YouTube API specialist. This is also my first StackOverflow post. I am OK with Linux, programming generally, and bash scripting.
I am currently testing the youtube-upload program. I have developed a bash script, which determines what videos are not already uploaded by comparing md5 checksums, and creates a logged bash script which then uploads new videos with descriptions, tags, and titles using the youtube-upload Python app. The upload process is working well.
However, the uploaded videos do not take on the default YouTube channel settings, including the default playlist allocation and most advanced settings (some of which are available when using a browser to upload). This means I have to browse the uploaded videos one by one to change the playlist the video belongs to, and most settings in the "Advanced Settings" tab for the video, which is time-consuming for hundreds of short videos.
I could not find any indication on Google that the V3 YouTube API exposes playlist allocation or many advanced settings.
I have the following questions:
Does anyone know whether playlist allocation and advanced settings are currently exposed by the YouTube API? If not, will they ever be?
If the API does expose the control, would anyone know how I can use bash (or Python, which seems relatively easy to learn) to change the playlist allocation and advanced settings for uploaded videos, or for new videos being uploaded by youtube-upload?
If the API does not expose the control, is there another method I am not aware of where I can create code that controls the browser (Firefox) to change these settings for all uploaded videos?
It seems to me that if the YouTube API does not expose these settings, then no process performing YouTube uploads can be fully automated. My goal is to fully automate the entire process, including playlist and advanced settings selection.
The API to link a resource to a playlist is PlaylistItems: https://developers.google.com/youtube/v3/docs/playlistItems
You can follow the list example to see/get the playlist IDs (if you forgot them). Then use the insert method to add your resource to a playlist (a minimal Python sketch follows the property list below).
Extract from insert Reference:
You must specify a value for these properties:
snippet.playlistId
snippet.resourceId
You can set values for these properties:
snippet.playlistId
snippet.position
snippet.resourceId
contentDetails.note
contentDetails.startAt
contentDetails.endAt
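As a rough sketch of what that insert call looks like in Python with the official google-api-python-client (the playlist and video IDs below are placeholders, and the OAuth client secret file is assumed to have been downloaded from the Google API console):

    # pip install google-api-python-client google-auth-oauthlib
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    # One-time interactive OAuth flow; needs the youtube scope.
    flow = InstalledAppFlow.from_client_secrets_file(
        "client_secret.json",  # placeholder: OAuth client secrets file
        scopes=["https://www.googleapis.com/auth/youtube"],
    )
    credentials = flow.run_local_server(port=0)

    youtube = build("youtube", "v3", credentials=credentials)

    # Add an already-uploaded video to a playlist.
    response = youtube.playlistItems().insert(
        part="snippet",
        body={
            "snippet": {
                "playlistId": "PLxxxxxxxxxxxx",  # placeholder playlist ID
                "resourceId": {
                    "kind": "youtube#video",
                    "videoId": "XXXXXXXXXXX",    # placeholder video ID
                },
            }
        },
    ).execute()
    print("Added playlist item:", response["id"])

Running something like this after each youtube-upload call would automate the playlist allocation. Some of the "Advanced Settings" map to fields on the videos resource (e.g. status.embeddable, status.license), but as you suspected, not every browser-only option is exposed by the API.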
You can try this. I am not a code master either, but it works: give ytb-up a shot, which is based on Selenium.
https://github.com/wanghaisheng/ytb-up
Features you may need:
1. Proxy support, with auto-detection of whether a proxy is needed.
2. Cookie support, for multiple channels under the same Google account.
3. Scheduled publishing: you can explicitly specify a date and time for each video, or you can set a publish policy with a daily publish count. For example, with a daily count of 4 and 5 videos, the first 4 will be published one day after the upload date and the remaining one two days after.
4. A workaround for Google account verification.

Store images temporary in Google App Engine?

I'm writing an app in Python which will check a website (let's call it A) for updates every 2 hours; if there are new posts, it will download the images in the posts and post them to another website (call it B), then delete those images.
Site B provides an API for uploading images with a description, which looks like:
upload(image_path, description), where image_path is the path of the image on your computer.
Now I've finished the app, and I'm trying to make it run on Google App Engine (because my computer won't run 24/7), but it seems that GAE won't let you write files to its file system.
How can I solve this problem? Or are there other choices for free Python hosting that provide a cron-job feature?
GAE has a Blobstore API, which can work pretty much as file storage, but that's probably not what you want. Actually, the right answer depends on what kind of API you're using: it may support file-like objects (so you could pass a urllib response object), accept URLs, or offer other useful options.
You shouldn't need to use temporary storage at all - just download the image with urlfetch into memory, then use another urlfetch to upload it to the destination site.
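A minimal sketch of that in-memory approach on (first-generation, Python 2) App Engine follows; site B's real API shape is unknown here, so the endpoint and the description parameter are hypothetical placeholders:

    import urllib
    from google.appengine.api import urlfetch

    def mirror_image(source_url, description):
        # Download the image into memory -- nothing is written to disk.
        result = urlfetch.fetch(source_url)
        if result.status_code != 200:
            raise RuntimeError("download failed: %d" % result.status_code)

        # Re-upload the raw bytes to site B. The endpoint and parameter
        # names are hypothetical stand-ins for whatever B's API expects.
        urlfetch.fetch(
            "https://site-b.example.com/upload?description=" + urllib.quote(description),
            payload=result.content,
            method=urlfetch.POST,
            headers={"Content-Type": "image/jpeg"},
        )

Paired with a cron.yaml entry running every 2 hours, this covers the whole flow without touching the file system.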

Using Google App Engine to display an RSS/Atom feed

I'm thinking of setting up a Google App Engine app that simply displays an RSS or Atom feed. The idea is that every once in a while (via a cron job or at the push of a magic button) the feed is read and copied into the app's internal data, ready to be viewed. This would be done in Python.
I found this page that seems to explain what I want to do. But it assumes I'm using some of the other Google products, as it relies on the Google API.
My idea was more along these lines: I add some new content, host it locally on my machine, go to the Google App administration panel, push a button, and my (locally hosted) feed is read and copied.
My questions now are:
Is the RSS (or Atom; one is enough) format specified well enough to handle adds/edits/deletes?
Are there any flavors or variants I should worry about?
Has this been done before? That would save me some work.
One option is to use the universal feed parser library, which will take care of most of these issues for you. Another option would be to use a PubSubHubbub-powered service such as Superfeedr, which will POST updates to you in a pre-sanitized form, eliminating most of your polling and parsing issues.
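For the polling route, a minimal sketch with the feedparser package (the feed URL is a placeholder) shows how little code the parsing side needs:

    # pip install feedparser
    import feedparser

    feed = feedparser.parse("https://example.com/feed.atom")  # placeholder URL

    print(feed.feed.get("title", "untitled feed"))
    for entry in feed.entries:
        # feedparser normalizes RSS and Atom into the same entry structure.
        print(entry.get("id"), entry.get("title"), entry.get("updated"))

Storing the entry IDs lets you detect adds and edits between polls; note that plain RSS/Atom has no standard way to announce deletions, which bears on the add/edit/delete question above.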
What about using an additional library, for instance feedparser?

Categories

Resources