I am using an application called Screaming Frog to scan my company's sites for broken images/links. Currently, the scans are run on a remote desktop PC on a scheduler that scans each site daily and stores the scan results as .csv files in a Google Drive. We use a remote desktop because the scans significantly slow down my laptop, so I'm unable to run the scans locally. What I want to do is somehow change the path to use that specific Google Drive and pull the .csv files from there. Then I run a command in PyCharm that takes those files and puts them in a Google Sheet.
So to summarize, I basically want to change the path to use a Google Drive instead of a local path on my work laptop.
Thank you in advance to anyone trying to help.
I think you should look at the Google Drive API documentation; it facilitates pushing and pulling files from your storage.
Here is the link: https://developers.google.com/drive/api/v3/quickstart/python
I couldn't comment on your post because of the new user restrictions.
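For example, here's a minimal sketch of pulling the scan .csv files down from Drive with the v3 client from that quickstart, assuming you have already completed its OAuth flow (so a token.json exists) and you know the ID of the Drive folder the scheduler writes to; FOLDER_ID is a placeholder.

```python
import io

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

# Placeholder: the ID of the Drive folder the scan results are saved to.
FOLDER_ID = "your-drive-folder-id"
SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

# token.json is produced by the quickstart's OAuth flow.
creds = Credentials.from_authorized_user_file("token.json", SCOPES)
service = build("drive", "v3", credentials=creds)

# List the .csv files sitting in that folder.
results = service.files().list(
    q="'{}' in parents and mimeType='text/csv'".format(FOLDER_ID),
    fields="files(id, name)",
).execute()

for item in results.get("files", []):
    # Stream each file's contents into a local file of the same name.
    request = service.files().get_media(fileId=item["id"])
    with io.FileIO(item["name"], "wb") as fh:
        downloader = MediaIoBaseDownload(fh, request)
        done = False
        while not done:
            _, done = downloader.next_chunk()
```

From there, your existing PyCharm command can read the local copies and push them into the Google Sheet as before.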
I want to copy files from Google Drive to OneDrive using any APIs in Python. Google provides some APIs, but I don't see anything related to copying files to another cloud.
One way to achieve this would be to download the files from Google Drive using the Google Drive API and then upload them to OneDrive.
Please share some pointers if there is a way to achieve this.
If you are running both sync clients on your desktop, you could just use Python to move the files from one folder to the other. See How to move a file in Python. By moving the file from your Google Drive folder to your OneDrive folder, the services should automatically sync and upload the files.
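For instance, here's a small sketch of that approach, assuming both desktop sync clients are installed; the two folder paths below are hypothetical and should point at your local Google Drive and OneDrive sync directories.

```python
import glob
import os
import shutil

# Hypothetical local paths for the two synced folders; adjust them to wherever
# the Google Drive and OneDrive desktop clients keep their folders on your machine.
GDRIVE_DIR = os.path.expanduser("~/Google Drive/exports")
ONEDRIVE_DIR = os.path.expanduser("~/OneDrive/exports")

for path in glob.glob(os.path.join(GDRIVE_DIR, "*")):
    # Move each file into the OneDrive folder; the OneDrive client then
    # syncs it to the cloud on its own.
    shutil.move(path, os.path.join(ONEDRIVE_DIR, os.path.basename(path)))
```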
(As an aside, another solution if you don't care how the problem gets solved: a service like https://publist.app might be of use.)
I was wondering if JupyterLab has an API that allows me to programmatically upload files from my local storage to the JupyterLab portal. Currently, I am able to manually select "Upload" through the UI, but I want to automate this.
I have searched their documentation but no luck. Any help would be appreciated. Also, I am using a chromebook (if that matters).
Thanks!!
Firstly, you can use the Python packages "requests" and "urllib" to upload files:
https://stackoverflow.com/a/41915132/11845699
This method is essentially the same as clicking the upload button, but the upload speed is not very satisfying, so I don't recommend it if you are uploading lots of files or some large files.
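For reference, here is a minimal sketch of that approach using the Jupyter server's contents REST API; the server URL, token, and file name are placeholders, and it assumes you can obtain an API token for the server (e.g. via `jupyter server list`).

```python
import base64
import json
import os

import requests

# Placeholders: your server URL, an API token for it, and the file to upload.
BASE_URL = "http://localhost:8888"
TOKEN = "your-token-here"
LOCAL_FILE = "data.csv"

with open(LOCAL_FILE, "rb") as f:
    body = {
        "type": "file",
        "format": "base64",
        "content": base64.b64encode(f.read()).decode("ascii"),
    }

# PUT the file to the contents API at the remote path you want it to appear at.
resp = requests.put(
    "{}/api/contents/{}".format(BASE_URL, os.path.basename(LOCAL_FILE)),
    headers={"Authorization": "token {}".format(TOKEN)},
    data=json.dumps(body),
)
resp.raise_for_status()
```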
I don't know whether your JupyterLab server is managed by your administrator or by yourself. In my case, I'm the administrator of the server in my lab, so I set up an NFS disk and mounted it on a folder in the JupyterLab working directory. The users can access this NFS disk via our local network or the internet. An NFS disk can handle lots of large files, which is much more efficient than the Jupyter upload button. I learned this from a talk by a TA at Berkeley: https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub
I highly recommend this if you can contact the person who has access to the file system of your Jupyter server. If you don't use Linux, then WebDAV is an alternative to NFS. Actually, anything that gives you access to a folder on the remote server is an option, such as Nextcloud or Pydio.
(If you can't ask the administrator to deploy such a service, then just use the Python packages.)
So I'm trying to download this dataset: https://data.broadinstitute.org/bbbc/BBBC006/
and I have a Python script that pulls each zip and extracts it locally. It worked on my local machine, but each of these is 1GB+ and I don't want to store that locally. When I tried loading the 34 folders on a Google Cloud instance with about 50GB of memory, it crashed around the 9th folder each time with a MemoryError.
A friend told me I need to load them into storage first and then pull them into a VM in sets, but I don't know much about cloud infrastructure and could use some pointers.
Specs:
- 34 folders (from the website)
- 1GB+ each
- zip files
- on Google Cloud
Thanks!
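One common cause of a MemoryError here is reading each whole zip into memory before extracting it. Below is a minimal sketch of streaming a single archive to disk and extracting it from there instead; the URL is a placeholder for one of the BBBC006 archives, and you would loop over all 34 in practice.

```python
import os
import zipfile

import requests

# Placeholder URL for one of the BBBC006 archives; loop over the real list
# of 34 archives in practice.
URL = "https://data.broadinstitute.org/bbbc/BBBC006/example_archive.zip"
OUT_DIR = "bbbc006"

os.makedirs(OUT_DIR, exist_ok=True)
zip_path = os.path.join(OUT_DIR, os.path.basename(URL))

# Stream the download to disk in 1MB chunks so the whole 1GB+ file never
# has to sit in memory at once.
with requests.get(URL, stream=True) as r:
    r.raise_for_status()
    with open(zip_path, "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)

# Extract from the file on disk, then delete the archive to free disk space.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(OUT_DIR)
os.remove(zip_path)
```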
I'm progressively adding images to a Dropbox folder remotely, which I then need to download on my Raspberry Pi 3.
The thing is, I only need the latest uploaded image in that folder, so that I can classify it remotely using some code deployed on my Raspberry Pi 3.
I don't know the Dropbox API well, so I'm not sure whether there's any functionality that directly does what I described above; instead I'm trying to download the entire folder with all the images locally and then select the image that I want.
Dropbox API v2 says they added functionality to download entire folders as zip files, but whenever I try to implement the code given in the API docs and save the file locally, the local zip file always says it's corrupt and can't be opened.
Does anyone know how this can be implemented in Python?
Edit: Or maybe shed some light on whether there's a simpler way to download the latest uploaded image in a folder without explicitly changing the code with that specific image's name or link?
https://www.dropbox.com/developers/documentation/http/documentation#files-download_zip
Start by getting the download working from a Linux terminal using curl; then you can work your way up to making the HTTP request with the Python Requests library. That way you can debug it systematically. Make sure there aren't any issues with file permissions on Dropbox or with the API tokens.
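As a starting point, here's a minimal Requests sketch of that download_zip call; the access token and folder path are placeholders. A common reason the saved zip appears corrupt is writing the decoded text instead of the raw bytes, hence the binary file mode and `resp.content` below.

```python
import json

import requests

ACCESS_TOKEN = "your-dropbox-access-token"  # placeholder
FOLDER_PATH = "/images"                     # placeholder: the folder to download

resp = requests.post(
    "https://content.dropboxapi.com/2/files/download_zip",
    headers={
        "Authorization": "Bearer {}".format(ACCESS_TOKEN),
        # download_zip takes its arguments in this header, not the request body.
        "Dropbox-API-Arg": json.dumps({"path": FOLDER_PATH}),
    },
)
resp.raise_for_status()

# Write the raw bytes in binary mode; writing resp.text instead is a common
# reason the saved zip ends up unreadable.
with open("images.zip", "wb") as f:
    f.write(resp.content)
```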
I have a Python 2.7 script that produces *.csv files. I'd like to run this Python script on a remote server and make the *.csv files publicly available to read.
Can this be done on Heroku? I've gone through the tutorial, but it seems to be geared towards people who want to create a whole web site.
If Heroku isn't the solution for me, what are the alternatives? I tried Google App Engine, but it requires Python 2.5 and won't work with 2.7.
MORE DETAILS:
I have a Python 2.7 script that analyzes all stocks that trade on the AMEX, NYSE, and NASDAQ exchanges and writes the output into *.csv files that can be read with a spreadsheet application. I want the script to automatically run every night on a remote server, and I want the *.csv files it produces to be publicly available.
Web hosting
OK, so you should be able to achieve what you need pretty simply. There are many web hosts with Python support, and your requirement is pretty simple. Just upload your Python scripts to the web server, then schedule a cron job to call your script at a specific time every day. Your script will run as scheduled and should save the .csv files in the web server's document root. Keep in mind your script doesn't need to run inside the web server, just on the same server; the web server will simply serve your static .csv files once you place them in its document root.
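For example, a crontab entry along these lines (added with `crontab -e`; the interpreter and script paths are placeholders) would run the script every night at 2:00 AM:

```
# minute hour day-of-month month day-of-week  command
0 2 * * * /usr/bin/python /home/youruser/stock_analysis.py
```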
Desktop with Dropbox
Another, maybe easier, option is to take any desktop machine and schedule your Python script to run on it each night; you can do this on Windows, Linux, or Mac. Also install Dropbox, which gives you 2GB of free online storage. Then your script just has to save the .csv files to the Dropbox/Public directory. When it does, they will automatically get synced to the Dropbox servers and can be accessed through your public URL like any other web page on the internet. You get 2GB for free, which should be more than enough for a whole bunch of CSV files.
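To sketch the idea (with placeholder data, and assuming the Dropbox client on that machine syncs ~/Dropbox/Public), the script only has to write its output into that directory; note that Python 2.7 opens CSV output files in binary mode:

```python
import csv
import os

# Assumes the Dropbox desktop client is installed and syncing this folder.
PUBLIC_DIR = os.path.expanduser("~/Dropbox/Public")

rows = [["ticker", "close"], ["AAPL", "123.45"]]  # placeholder output data

# Python 2.7: open CSV output files in binary mode.
with open(os.path.join(PUBLIC_DIR, "nightly_stocks.csv"), "wb") as f:
    csv.writer(f).writerows(rows)
```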