Download all files in a SharePoint folder using Python?

I have a SharePoint folder where several files are dumped every week. Using Python, I would like to download all the files from it to a local folder, do some transformations, and upload a single consolidated file back to a different location on SharePoint. But whatever calls I make return an empty JSON file.
Here is the code I have tried till now:
import sharepy
sess = sharepy.connect(site='company.sharepoint.com', username='username', password='password')
site = r'https://company.sharepoint.com/'
path = r'/some-site/Documents/Folder1/Folder2/Folder3/'
r = sess.get(site + """_api/web/GetFolderByServerRelativeUrl('"""+path+"""')/Files""")
r
r.json()['d']['results']
The r object is a <Response [200]>. I want to download all the files in Folder3, but it returns empty. If I go to the SharePoint website, I can see all the files using the same username that was used to create the sess object.
I am getting the path variable from the bottom of the details pane. I have tried many options other than GetFolderByServerRelativeUrl, but nothing seems to work.
I am new to Python and have no clue about Rest APIs.
Thank you for your help.

You need to get a list of the files in the library first:
# Get list of all files and folders in library
files = s.get("{}/_api/web/lists/GetByTitle('{}')/items?$select=FileLeafRef,FileRef"
              .format(site, library)).json()["d"]["results"]
For the complete solution, check this:
https://github.com/JonathanHolvey/sharepy/issues/5
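Putting that together, here is a minimal sketch of the list-then-download approach from the linked issue, using sharepy's getfile helper. The library title, the local folder name, and the Folder3 filter are assumptions you will need to adapt, and the sketch assumes Folder3 contains only files, not subfolders:
import os
import sharepy

s = sharepy.connect(site='company.sharepoint.com', username='username', password='password')
site = 'https://company.sharepoint.com'
library = 'Documents'    # display title of the document library (assumption)
local_dir = 'downloads'  # local folder to save into (assumption)
os.makedirs(local_dir, exist_ok=True)

# List every item in the library, keeping only its name and server-relative path
files = s.get("{}/_api/web/lists/GetByTitle('{}')/items?$select=FileLeafRef,FileRef"
              .format(site, library)).json()["d"]["results"]

for f in files:
    # Keep only items that sit inside the target folder
    if '/Folder1/Folder2/Folder3/' in f["FileRef"]:
        s.getfile(site + f["FileRef"], filename=os.path.join(local_dir, f["FileLeafRef"]))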

Related

How to copy and paste a file to a new folder in google drive using python? [duplicate]

I wrote a short function in Google Apps script that can make a copy of a specific file that is stored on Google Drive. The purpose of it is that this file is a template and every time I want to create a new document for work I make a copy of this template and just change the title of the document. The code that I wrote to make a copy of the file and store it in the specific folder that I want is very simple:
function copyFile() {
  var file = DriveApp.getFileById("############################################");
  var folder = DriveApp.getFolderById("############################");
  var filename = "Copy of Template";
  file.makeCopy(filename, folder);
}
This function takes a specific file, based on its ID, and a specific folder, based on its ID, and puts the copy entitled "Copy of Template" into that folder.
I have been searching all over and I cannot seem to find this. Is there a way to do the exact same thing, but using Python instead? Or, at the very least, is there a way to have Python call that Apps Script function? I need this to be done in Python because I am writing a script that performs many tasks at once whenever I start a new project for work, such as creating a new document from a template in Google Drive. It also does other things that are not related to Google Drive at all, so they cannot be done in Google Apps Script.
There are a few tutorials around the web that give partial answers. Here is a step-by-step guide of what you need to do.
1. Open a command prompt and type (without the quotes) "pip install PyDrive".
2. Follow step one of the instructions here - https://developers.google.com/drive/v3/web/quickstart/python - to set up an account.
3. When that is done, click on Download JSON and a file will be downloaded. Make sure to rename it to client_secrets.json, not client_secret.json as the Quick Start says to do.
4. Next, make sure to put that file in the same directory as your Python script. If you are running the script from a console, that directory might be your username directory.
5. I assume that you already know the ID of the folder you are placing this file in and the ID of the file you are copying. If you don't, there are tutorials on how to find them using Python, or you can open the file in Docs and the ID will be in the URL. Enter the ID of the folder and the ID of the file, and when you run this script it will make a copy of the chosen file and place it in the chosen folder.
6. One thing to note is that while running, your browser window will open up and ask for permission; just click accept and the script will complete.
7. In order for this to work you might have to enable the Google Drive API, which is in the APIs section.
Python Script:
## Create a new Document in Google Drive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
gauth = GoogleAuth()
gauth.LocalWebserverAuth()
drive = GoogleDrive(gauth)
folder = "########"
title = "Copy of my other file"
file = "############"
drive.auth.service.files().copy(fileId=file,
body={"parents": [{"kind": "drive#fileLink",
"id": folder}], 'title': title}).execute()
From https://developers.google.com/drive/v2/reference/files/copy
from apiclient import errors
# ...

def copy_file(service, origin_file_id, copy_title):
    """Copy an existing file.

    Args:
        service: Drive API service instance.
        origin_file_id: ID of the origin file to copy.
        copy_title: Title of the copy.

    Returns:
        The copied file if successful, None otherwise.
    """
    copied_file = {'title': copy_title}
    try:
        return service.files().copy(
            fileId=origin_file_id, body=copied_file).execute()
    except errors.HttpError as error:
        print('An error occurred: %s' % error)
        return None
With API v3, copy a file to a directory with a different name:
service.files().copy(fileId='PutFileIDHere', body={"parents": ['ParentFolderID'], 'name': 'NewFileName'} ).execute()
For me, the answer of @Rashi worked with a small modification.
instead of:
'name': 'NewFileName'
this worked:
'title': 'NewFileName'
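For completeness, here is a self-contained sketch of the same copy using the current v3 client (google-api-python-client with google-auth-oauthlib) instead of PyDrive. The credential file name and both IDs are placeholders, and the Drive API still needs to be enabled as described above:
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive"]

# OAuth client file downloaded from the Google Cloud console (placeholder name)
flow = InstalledAppFlow.from_client_secrets_file("client_secrets.json", SCOPES)
creds = flow.run_local_server(port=0)
service = build("drive", "v3", credentials=creds)

file_id = "PutFileIDHere"     # ID of the template file (placeholder)
folder_id = "ParentFolderID"  # ID of the destination folder (placeholder)

# In v3 the copied file's name goes in 'name' and the target folder in 'parents'
copied = service.files().copy(
    fileId=file_id,
    body={"name": "Copy of Template", "parents": [folder_id]},
).execute()
print(copied["id"])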

Iterate over files in databricks Repos

I would like to iterate over some files in a folder that lives in Databricks Repos.
How would one do this? I don't seem to be able to access the files in Repos.
I have added a picture that shows which folders I would like to access (the dbrks & sql folders).
Thanks :)
Image of the repo folder hierarchy
You can read files from repo folders. The path is /mnt/repos/; this is the top folder when opening the repo window. You can then iterate over these files yourself.
Whenever you find the file you want, you can read it with (for example) Spark. For example, to read a CSV file:
spark.read.format("csv").load(
path, header=True, inferSchema=True, delimiter=";"
)
If you just want to list files in the repositories, then you can use the list command of the Workspace REST API. Using it you can implement recursive listing of files. The actual implementation would differ based on your requirements, for example whether you need to generate a list of full paths or a list with subdirectories, etc. It could be something like this (not tested):
import requests

my_pat = "generated personal access token"
workspace_url = "https://name-of-workspace"

def list_files(base_path: str):
    # Call the Workspace API's list endpoint for the given path
    lst = requests.request(method='get',
                           url=f"{workspace_url}/api/2.0/workspace/list",
                           headers={"Authorization": f"Bearer {my_pat}"},
                           json={"path": base_path}).json()["objects"]
    results = []
    for i in lst:
        # Recurse into directories and repos, collect everything else
        if i["object_type"] == "DIRECTORY" or i["object_type"] == "REPO":
            results.extend(list_files(i["path"]))
        else:
            results.append(i["path"])
    return results

all_files = list_files("/Repos/<my-initial-folder")
But if you want to read the content of the files in the repository, then you need to use the so-called Arbitrary Files support that is available since DBR 8.4.
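With that support enabled, a notebook that lives in the repo can typically open sibling files with ordinary Python I/O, since the working directory is set to the notebook's folder. A minimal sketch; the relative path and file name below are hypothetical:
import os

# Hypothetical example: read a SQL file from the repo's sql folder,
# relative to the notebook's location inside the repo
print(os.getcwd())  # shows where in the repo this notebook runs
with open("./sql/example_query.sql") as f:
    query_text = f.read()
print(query_text)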

Python API for Github, getting contents in specific directory for specific branch not returning all content

Using the PyGithub API, I am attempting to retrieve all contents from a specific folder in a specific branch of a repository hosted on GitHub. I can't share the actual repository or specifics regarding the data, but the code I am using is this:
import github
import json
import requests
import base64
from collections import namedtuple
Package = namedtuple('Package', 'name version')
# Parameters
gh_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
header = {"Authorization": f"token {gh_token}"}
gh_hostname = 'devtopia.xxx.com'
gh = github.Github(base_url=f'https://{gh_hostname}/api/v3', login_or_token = gh_token)
repo_name = "xxxxxxxxx/SupportFiles"
conda_meta = "xxxxxxx/bin/Python/envs/xxxxxx-xx/conda-meta"
repo = gh.get_repo(repo_name)
def parse_conda_meta(branch):
    package_list = []
    meta_contents = repo.get_contents(conda_meta, ref=branch)  # << Returns less files than expected for
                                                               # a specified branch "xxx/release/3.2.0",
                                                               # returns expected number of files for
                                                               # "master" branch.
    for i, pkg in enumerate(meta_contents):
        if ".json" in pkg.name:  # filter for JSON files
            print(i, pkg.name)
            # Need to use GitHub Data API (REST) blobs instead of easier
            # `github` with `pkg.decoded_content` here because that method
            # only works with files <= 1MB whereas Data API allows for
            # reading files <= 100MB.
            resp = requests.get(f"https://devtopia.xxxx.com/api/v3/repos/xxxxxxxxx/SupportFiles/git/blobs/{pkg.sha}?ref={branch}", headers=header)
            pkg_cont = json.loads(base64.b64decode(json.loads(resp.content)["content"]))
            package_list.append(Package(pkg_cont['name'], pkg_cont['version']))
        else:
            print('>>', i, pkg.name)
    return package_list

if __name__ == "__main__":
    pkgs = parse_conda_meta("xxx/release/3.2.0")
    print(pkgs)
    print(len(pkgs))
For some reason that I can't get to the bottom of, I am not getting the correct number of files returned by repo.get_contents(conda_meta, ref=branch). For the branch that I am specifying, when that branch is checked out I see 186 files in the conda-meta folder. However, repo.get_contents(conda_meta, ref=branch) returns only 182; I am missing four JSON files.
Is there some limitation to repo.get_contents that I'm not aware of? I've been reading the docs but can't find anything that hints at the problem I am having. There is one bit about it only handling files up to 1 MB, but I am seeing files larger than this returned (e.g. python is 1.204 MB and is returned in the list of files). I believe that limit just applies to reading file content over 1 MB, which I deal with by using the GitHub Data API (REST) further downstream. Is there something I'm doing wrong here?
Thanks for reading, any help with this is much appreciated!
Update with solution!
The Problem:
After some more digging, I have found the problem's cause. It's not to do with the code above or repo.get_contents(conda_meta, ref=branch) specifically. It is actually a unix/windows clash that was mistakenly introduced into our repository for this specific branch "xxx/release/3.2.0" but not present in others.
So what was the problem? NTFS (and Windows more broadly) is case-insensitive by default, but Git comes from the Unix world and is case-sensitive by default.
We inadvertently created two folders for Python in the bin directory of the conda_meta path (xxxxxx/bin/), one folder called "Python" and one called "python" (note the lower case). When pulling the repository locally, only the "Python" folder shows up, containing all 186 files. On GitHub, however, the path with "Python" contains 182 files while the path with "python" contains the remaining 4 files.
The Solution:
The solution is to add a conda_meta_folders parameter to parse_conda_meta that takes a list of paths and searches each directory, as sketched below. There might be a slicker solution, though; I'm looking into whether it is possible to do something like git config core.ignorecase true with the PyGithub API. Does anyone know if it is possible to have PyGithub honor this or be configured for this?
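A minimal sketch of that workaround, reusing the names (repo, header, Package, requests, json, base64) defined in the question above; the second, lower-case path is an assumption based on the description of the duplicate folders:
def parse_conda_meta(branch, conda_meta_folders):
    package_list = []
    for folder in conda_meta_folders:
        # Collect packages from every given conda-meta directory
        for pkg in repo.get_contents(folder, ref=branch):
            if ".json" in pkg.name:
                resp = requests.get(f"https://devtopia.xxxx.com/api/v3/repos/xxxxxxxxx/SupportFiles/git/blobs/{pkg.sha}?ref={branch}", headers=header)
                pkg_cont = json.loads(base64.b64decode(json.loads(resp.content)["content"]))
                package_list.append(Package(pkg_cont['name'], pkg_cont['version']))
    return package_list

pkgs = parse_conda_meta("xxx/release/3.2.0", [
    "xxxxxxx/bin/Python/envs/xxxxxx-xx/conda-meta",   # upper-case folder
    "xxxxxxx/bin/python/envs/xxxxxx-xx/conda-meta",   # lower-case duplicate (assumption)
])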

How to delete multiple files at once using Google Drive API

I'm developing a Python script that will upload files to a specific folder in my Drive. As I've come to notice, the Drive API provides an excellent implementation for that, but I did encounter one problem: how do I delete multiple files at once?
I tried grabbing the files I want from the Drive and organizing their IDs, but no luck there... (a snippet below)
dir_id = "my folder Id"
file_id = "avoid deleting this file"
dFiles = []
query = ""
#will return a list of all the files in the folder
children = service.files().list(q="'"+dir_id+"' in parents").execute()
for i in children["items"]:
print "appending "+i["title"]
if i["id"] != file_id:
#two format options I tried..
dFiles.append(i["id"]) # will show as array of id's ["id1","id2"...]
query +=i["id"]+", " #will show in this format "id1, id2,..."
query = query[:-2] #to remove the finished ',' in the string
#tried both the query and str(dFiles) as arg but no luck...
service.files().delete(fileId=query).execute()
Is it possible to delete selected files (I don't see why it wouldn't be possible, after all, it's a basic operation)?
Thanks in advance!
You can batch multiple Drive API requests together. Something like this should work using the Python API Client Library:
def delete_file(request_id, response, exception):
    if exception is not None:
        # Do something with the exception
        pass
    else:
        # Do something with the response
        pass

batch = service.new_batch_http_request(callback=delete_file)

for file in children["items"]:
    batch.add(service.files().delete(fileId=file["id"]))

batch.execute(http=http)
If you delete or trash a folder, it will recursively delete/trash all of the files contained in that folder. Therefore, your code can be vastly simplified:
dir_id = "my folder Id"
file_id = "avoid deleting this file"
service.files().update(fileId=file_id, addParents="root", removeParents=dir_id).execute()
service.files().delete(fileId=dir_id).execute()
This will first move the file you want to keep out of the folder (and into "My Drive") and then delete the folder.
Beware: if you call delete() instead of trash(), the folder and all the files within it will be permanently deleted and there is no way to recover them! So be very careful when using this method with a folder...
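If you would rather keep a safety net, a small variant of the snippet above (same v2-style client as in the question) moves the folder to the trash instead, so it can still be recovered from the Drive trash:
dir_id = "my folder Id"
file_id = "avoid deleting this file"

# Move the file to keep out of the folder, then trash the folder (recoverable from the trash)
service.files().update(fileId=file_id, addParents="root", removeParents=dir_id).execute()
service.files().trash(fileId=dir_id).execute()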

Compare the contents of archiwum.rar with the data extracted from the .rar into a folder on Windows 7

Does anyone know how to compare the number of files and the size of the files in archiwum.rar with its extracted content in the folder?
The reason I want to do this is that the server I'm working on has been restarted a couple of times during extraction, and I am not sure whether all the files have been extracted correctly.
The .rar files are more than 100 GB each and the server is not that fast.
Any ideas?
PS: if the solution would be some code instead of a standalone program, my preference is Python.
Thanks
In Python you can use the rarfile module. Its usage is similar to the built-in zipfile module.
import rarfile
import os.path

extracted_dir_name = "samples/sample"  # Directory with extracted files

archive = rarfile.RarFile("samples/sample.rar", "r")

# List file information
for info in archive.infolist():
    print(info.filename, info.date_time, info.file_size)
    # Compare with extracted file here
    extracted_file = os.path.join(extracted_dir_name, info.filename)
    if info.file_size != os.path.getsize(extracted_file):
        print("Different size!")
