Accessing Google Keep data through script? - python

I am trying to create a script that gets the data from a Google Keep list. I was thinking Google Takeout might do part of what I want, but I cannot find an API to automate the downloads. Does anyone know a way to grab this data via script (Python/bash) so that I can easily extract what I need?

I am not sure if it is allowed or not, but you could log in with a requests session and parse the pages you need with BeautifulSoup.
I've written a quite similar script in Python; you can find it on GitHub. I think it's pretty self-explanatory, but if you require any more help, feel free to ask.
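A minimal sketch of that general pattern (the login URL, form fields, and CSS selector below are all placeholders, and Google's real sign-in flow is far more involved and may block scripted logins, so treat this as an outline rather than a working Keep client):

import requests
from bs4 import BeautifulSoup

session = requests.Session()
# Placeholder login endpoint and form fields; inspect the real form first.
session.post('https://example.com/login', data={'user': 'me', 'password': 'secret'})
page = session.get('https://example.com/notes')  # placeholder page to parse
soup = BeautifulSoup(page.text, 'html.parser')
for item in soup.select('.note'):  # placeholder selector
    print(item.get_text(strip=True))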

You could use the Selenium library for that.
I used the framework to scrape the keep.google.com webpage for all the notes and export them to a CSV file.
This might be helpful; I made the script to back up my notes to my computer:
https://github.com/darshkpatel/GoogleKeep_Backup
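For reference, a rough sketch of that approach (not the linked script itself; Keep's DOM changes often, so the selector below is a placeholder you would have to inspect and update, and you need chromedriver installed):

import csv
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://keep.google.com')
input('Log in in the browser window, then press Enter here...')

# 'div[role="listitem"]' is a guess at the note container; inspect the page.
notes = driver.find_elements(By.CSS_SELECTOR, 'div[role="listitem"]')
with open('keep_notes.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for note in notes:
        writer.writerow([note.text])
driver.quit()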

There is no API for Google Keep at this time. I don't think you're going to be able to automate Google Takeout either; the best you will be able to do is run it manually and then create your own application to import the data wherever you want it.

Here is an automated solution for this question: https://github.com/Dmitry9/exportKeep
Or just execute these commands in the terminal:
git clone https://github.com/Dmitry9/exportKeep.git;
cd exportKeep;
npm install;
npm run scrape;
After all dependencies are installed (which could take a minute or so), a Chrome instance will navigate to the sign-in page. After you post your credentials, it will scroll to the bottom of the window to force the browser to load all of the notes into the DOM. Inspecting the terminal output, you will find the path to the saved JSON file.

In the meantime there is an official API; see here: https://developers.google.com/keep/api/reference/rest
Also, there is a Python library that implements this API (I'm not the author of the library): https://github.com/kiwiz/gkeepapi
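A minimal sketch of reading notes with gkeepapi (hedged: the authentication details depend on the library version; recent releases authenticate with a device master token rather than a password):

import gkeepapi

keep = gkeepapi.Keep()
master_token = '...'  # obtained separately; see the gkeepapi docs
keep.authenticate('user@example.com', master_token)
keep.sync()  # make sure the local cache is up to date
for note in keep.all():
    print(note.title, note.text)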

Related

How can I get the token-sheets.pickle and token-drive.pickle files to use ezsheets for Google Sheets?

I am trying to set up ezsheets for use with Google Sheets. I followed the instructions from here https://ezsheets.readthedocs.io/en/latest/ and here https://automatetheboringstuff.com/2e/chapter14/
The setup process works quite differently on my computer: somehow I could download credentials-sheets.json, but I still need the token-sheets.pickle and token-drive.pickle files. When I run import ezsheets, no browser window is opened as described in the setup instructions. Nothing happens.
Is there another way to download both files?
I followed the steps you referenced and managed to generate the files, but I also encountered the same issue before figuring out the cause. The problem is that there are a few possible causes, and the script fails silently without telling you exactly what happened.
Here are a few suggestions:
First off you need to configure your OAuth Consent Screen. You won't be able to create the credentials without it.
Make sure that you have the right credentials file. To generate it you have to go to the Credentials page in the Cloud Console. The docs say that you need an OAuth Client ID. Make sure that you have chosen the correct app at the top.
Then you will be prompted to choose an application type. According to the docs you shared the type should be "Other", but this is no longer available so "Desktop app" is the best equivalent if you're just running a local script.
After that you can just choose a name and create the credentials. You will be prompted to download the file afterwards.
Check that the credentials-sheets.json file has that exact name.
Make sure that the credentials-sheets.json file is located in the same directory where you're running your Python script or console commands.
Check that you've enabled both the Sheets and Drive API in your GCP Project.
Python will try to set up a temporary server on http://localhost:8080/ to retrieve the pickle files. If another application is using port 8080, it will also fail. In my case a previously failed Python script was hanging on to that port.
To find and close the processes using port 8080 you can refer to this answer for Linux/Mac or this other answer for Windows; there is also a quick check from Python sketched at the end of this answer. Just make sure that the process is not something you're currently using.
I just used the single import ezsheets command to get the files, so after getting token-sheets.pickle I had to run it again to get token-drive.pickle, but after that the library should detect that you already have both files.
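As a quick sanity check before running import ezsheets, a snippet like this (just an illustration, not part of ezsheets) tells you whether port 8080 is free:

import socket

# If the bind raises OSError, another process is still holding port 8080
# and the OAuth flow will not be able to start its local server.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(('localhost', 8080))
print('Port 8080 is free')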

Github scraping for all javascript files

I need to extract all available JavaScript files across the projects available on GitHub using Python. I looked for a GitHub API and found this:
https://developer.github.com/v3/
I don't understand what kind of requests I have to send and how to compose the URL. I would prefer not to depend on another third-party API if possible.
Please guide me in the right direction. Any help would be appreciated!
To gather the files you can use this in your Python script:
import os
# -O saves the file under its remote name; use -o <filename> <url> to pick the name yourself.
os.system("curl -O https://github.com/file.js")
Replace the URL with the individual file name, or in your case a variable in your loop, to grab all files from a repo. You will need to repeat this for each org/user/repo/etc.
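Since the question asks about the v3 API, here is a hedged sketch of an alternative: listing the .js paths in one repository through the git trees endpoint (the owner, repo, and branch below are placeholders, and unauthenticated requests are rate-limited):

import requests

owner, repo, branch = 'octocat', 'Hello-World', 'master'  # placeholders
resp = requests.get(
    f'https://api.github.com/repos/{owner}/{repo}/git/trees/{branch}',
    params={'recursive': '1'},  # walk the whole tree in one call
)
resp.raise_for_status()
for entry in resp.json().get('tree', []):
    if entry['path'].endswith('.js'):
        print(entry['path'])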
Download remote files using curl
Running shell commands from Python

How can I remotely update a python script

How can I update a Python script remotely? I have a program I would like to share, but it will be frequently updated, so I want to be able to update it remotely so that the users do not have to re-install it every day. I have already searched Stack Overflow for an answer but did not find anything I could understand. Any help will be mentioned in the project's credits!
A very good solution would be to build a web app. You can use Django, Bottle, or Flask, for example.
Your users just connect to your URL with a browser. You are in complete control of the code and can update whenever you want without any action on their part.
They also do not need to install anything in the first place, and browsers nowadays provide a lot of flexibility and dynamic content.
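A minimal sketch of that idea with Flask (the route, message, and port are arbitrary):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # Whatever you return here is always the latest deployed version;
    # users never install or update anything themselves.
    return 'Hello from the current version of the app'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)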

using google chrome's downloader - python 3

I'm trying to download files using Python 3. I use webbrowser.open_new(url) to open file locations. Some files are downloaded automatically by Chrome's downloader, and some are just opened in a Chrome window. How can I choose between the options?
You cannot influence that, not with the Python webbrowser module.
What is downloaded and what is displayed in the browser is a preference set in the browser itself.
You could try and set those preferences using Selenium, see Set chrome.prefs with python binding for selenium in chromedriver. This is not going to be simple; you'll need to figure out the exact preference strings to alter. Perhaps the Chromium prefences list can be used as a guide there.
The web server on which the file is hosted sends a header that suggests to the browser how it might handle the file, and the user's preferences hold some sway as well. You likely won't be able to override it easily.
You can avoid this by not using a web browser from Python. urllib.request or, better yet, the third-party requests module is a much easier way to talk to the web.
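For example, a minimal sketch of downloading a file directly with requests (the URL and filename are placeholders):

import requests

url = 'https://example.com/report.pdf'  # placeholder URL
resp = requests.get(url, stream=True)
resp.raise_for_status()
with open('report.pdf', 'wb') as f:
    # Stream the body in chunks so large files are not held in memory at once.
    for chunk in resp.iter_content(chunk_size=8192):
        f.write(chunk)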

Python Google Checkout Notification API

I am attempting to work with Google Checkout on Google App Engine. Currently, I am writing everything in Python and have the checkout process working. However, I am having some difficulty getting notifications of processed orders to function. I've been searching for Python examples, but thus far have been unsuccessful. Working directly from the documentation provided by Google, I have not been able to get my notifications working either. Would anyone happen to have a framework/demo example of Google Checkout notifications in Python?
The chippyshop source has a nice example of a python Google Checkout implementation that should get you moving in the right direction.
UPDATE: I actually ended up needing to do this myself so I extended the example linked above and turned it into a reusable RequestHandler for use on GAE.
You can find the source and examples on github which should serve as a good starting point for any others following this path.
