I am trying to create a public folder in Google Drive and get back a link for sharing. This is what I have:
def createfolder(foldername, service):
    new_role = 'reader'
    types = 'anyone'
    # create folder
    file_metadata = {
        'name': '{}'.format(foldername),
        'mimeType': 'application/vnd.google-apps.folder',
        'role': 'reader',
        'type': 'anyone',
    }
    file = service.files().create(body=file_metadata,
                                  fields='id,webViewLink').execute()
    print('Folder ID: %s' % file.get('webViewLink'))
    return file.get('id')
I got this far: it creates a folder and prints the link. I tried adding the role and type fields into the body and setting them to reader/anyone, but this is not working; the role and type fields seem to be ignored. Is there a way to do this on create, or do I have to change the permissions after I create it?
You have to call Permissions.create:
File permissions are handled via Permissions, not through Files methods.
If you check the Files resource representation, you'll notice that some fields, like name or mimeType, have the word writable under Notes. This means you can modify these fields directly using this resource (Files) methods.
If you check the property permissions, though, you'll notice there's no writable there. This means permissions cannot be updated directly, using Files methods. You have to use Permissions methods instead.
More specifically, you have to call Permissions.create after creating the folder and retrieving its ID.
Code snippet:
def shareWithEveryone(folderId, service):
    payload = {
        "role": "reader",
        "type": "anyone"
    }
    service.permissions().create(fileId=folderId, body=payload).execute()
Reference:
permissions().create(fileId=*, body=None)
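Putting the two calls together, a minimal create-then-share sketch (the function name is illustrative; both calls use the Drive v3 API as described above):

def create_public_folder(folder_name, service):
    # create the folder first; only writable File fields belong in the body
    file_metadata = {
        'name': folder_name,
        'mimeType': 'application/vnd.google-apps.folder',
    }
    folder = service.files().create(body=file_metadata,
                                    fields='id,webViewLink').execute()

    # then grant read access to anyone via the Permissions resource
    payload = {'role': 'reader', 'type': 'anyone'}
    service.permissions().create(fileId=folder['id'], body=payload).execute()

    # the webViewLink is now a publicly shareable link
    return folder.get('webViewLink')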
Creating a folder and changing the permissions are two different calls.
Create your directory first
file = service.files().create(body=file_metadata).execute()
Then do a permissions.update to set the permissions on the file to be public.
permissions = service.permissions().update(body=permissions).execute()
I am not a python dev so the code is a guess
Here I'm trying to create an entry that takes an HTML file as input. I'm attempting it with the following code but getting permission denied. How can I reference that HTML template in my test case?
# CREATE EMAIL TEMPLATE
data = {
    "user": user,
    "platform_subscriber": project,
    "name": "TESTCASE TEMPLATE",
    "html_template": "../nice.html"
}
email_template, created = EmailTemplate.objects.custom_get_or_create(data)
print("The key", email_template)
self.email_template = email_template.id
Log:
raise SuspiciousOperation("Attempted access to '%s' denied." % name)
django.core.exceptions.SuspiciousOperation: Attempted access to '../nice.html' denied.
Django can't access files that are stored above the working directory... otherwise your webapp could read things it shouldn't, which is a security risk.
Usually we would keep the template in the templates directory of your app.
So move ../nice.html to ./templates/nice.html and you should be fine.
There are many more sophisticated options to manage templates if you check the docs.
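Concretely, after moving the file, the question's payload would drop the '../' traversal. A sketch based on the question's own code (EmailTemplate and custom_get_or_create come from the asker's project):

# sketch: reference the template by a name Django's loader can resolve,
# i.e. relative to <app>/templates/, with no '../' traversal
data = {
    "user": user,
    "platform_subscriber": project,
    "name": "TESTCASE TEMPLATE",
    "html_template": "nice.html"
}
email_template, created = EmailTemplate.objects.custom_get_or_create(data)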
I'm tasked with migrating repos to GitLab and I decided to automate the process using python-gitlab. Everything works fine except for binary or considered-binary files like compiled object files (.o) or .zip files. (I know that repositories are no place for binaries; I work with what I've got and what I'm told to do.)
I'm able to upload them using:
import base64
import gitlab

project = gitlab.Gitlab("git_adress", "TOKEN")
bin_content = base64.b64encode(open("my_file.o", 'rb').read()).decode()
and then:
data = {'branch':'main', 'commit_message':'go away', 'actions':[{'action': 'create', 'file_path': "my_file.o", 'content': bin_content, 'encode' : 'base64'}]}
project.commits.create(data)
The problem is that the content of such files inside the GitLab repository ends up as something like:
f0VMRgIBAQAAAAAAAAAAAAEAPgABAAAAAAAAAAAAA....
Which is not what I want.
If I don't .decode(), I get an error saying:
TypeError: Object of type bytes is not JSON serializable
Which is expected, since I sent the file opened in binary mode and encoded with base64.
I'd like to have such files uploaded/stored the way they are when I upload them using the web GUI's "upload file" option.
Is it possible to achieve this using the python-gitlab API? If so, how?
The problem is that Python's base64.b64encode function will provide you with a bytes object, but REST APIs (specifically, JSON serialization) want strings. Also the argument you want is encoding not encode.
Here's the full example to use:
from base64 import b64encode
import gitlab

GITLAB_HOST = 'https://gitlab.com'
TOKEN = 'YOUR API KEY'
PROJECT_ID = 123  # your project ID

gl = gitlab.Gitlab(GITLAB_HOST, private_token=TOKEN)
project = gl.projects.get(PROJECT_ID)

with open('myfile.o', 'rb') as f:
    bin_content = f.read()

b64_content = b64encode(bin_content).decode('utf-8')
# b64_content must be a string!

f = project.files.create({'file_path': 'my_file.o',
                          'branch': 'main',
                          'content': b64_content,
                          'author_email': 'test@example.com',
                          'author_name': 'yourname',
                          'encoding': 'base64',  # important!
                          'commit_message': 'Create testfile'})
Then in the UI, you will see GitLab has properly recognized the contents as binary, rather than text.
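For completeness, the commits.create approach from the question should also work once the key is corrected from 'encode' to 'encoding'. A sketch reusing b64_content from above:

# same fix applied to the original commits-based approach:
# the action key must be 'encoding', not 'encode'
data = {
    'branch': 'main',
    'commit_message': 'add binary object file',
    'actions': [{
        'action': 'create',
        'file_path': 'my_file.o',
        'content': b64_content,
        'encoding': 'base64',  # a misspelled key is silently ignored
    }]
}
project.commits.create(data)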
Here's a simplified version of the JSON I am working with:
{
    "libraries": [
        {
            "library-1": {
                "file": {
                    "url": "foobar.com/.../library-1.bin"
                }
            }
        },
        {
            "library-2": {
                "application": {
                    "url": "barfoo.com/.../library-2.exe"
                }
            }
        }
    ]
}
Using json, I can json.loads() this file. I need to be able to find the 'url', download it, and save it to a local folder called libraries. In this case, I'd create two folders within libraries/, one called library-1, the other library-2. Within these folders would be whatever was downloaded from the url.
The issue, however, is being able to get to the url:
my_json = json.loads(...)  # get the json

for library in my_json['libraries']:
    file.download(library['file']['url'])  # doesn't access ['application']['url']
Since the JSON I am using has a variety of accessors, sometimes 'file', other times 'dll', etc., I can't use one specific dictionary key. How can I handle multiple keys? Would there be a modular way to do this?
Edit: There are numerous accessors, 'file', 'application' and 'dll' are only some examples.
You can just iterate through each level of the dictionary and download the files if you find a url.
urls = []
for library in my_json['libraries']:
    for lib_name, lib_data in library.items():
        for module_name, module_data in lib_data.items():
            url = module_data.get('url')
            if url is not None:
                # create local directory with lib_name
                # download files from url to local directory
                urls.append(url)

# urls = ['foobar.com/.../library-1.bin', 'barfoo.com/.../library-2.exe']
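The two placeholder comments could be filled in along these lines. A minimal sketch, assuming the requests library is available and that the urls need an https:// scheme prepended (they are shown without one in the JSON):

import os
import requests

def download_to(lib_name, url):
    # hypothetical helper: save the file behind url into libraries/<lib_name>/
    target_dir = os.path.join('libraries', lib_name)
    os.makedirs(target_dir, exist_ok=True)
    filename = url.rsplit('/', 1)[-1]
    response = requests.get('https://' + url, timeout=30)
    response.raise_for_status()
    with open(os.path.join(target_dir, filename), 'wb') as f:
        f.write(response.content)

# inside the loop above, instead of just collecting:
#     download_to(lib_name, url)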
This should work:
for library in my_json['libraries']:
    for lib_data in library.values():
        for module_data in lib_data.values():
            file.download(module_data['url'])
I would suggest doing it like this:
for library in my_json['libraries']:
    library_data = library.popitem()[1].popitem()[1]
    file.download(library_data['url'])
Try this:
for library in my_json['libraries']:
    if 'file' in library:
        file.download(library['file']['url'])
    elif 'dll' in library:
        file.download(library['dll']['url'])
It just checks whether your dict (created by parsing the JSON) has a key named 'file'. If so, it uses the 'url' of the dict corresponding to the 'file' key. If not, it tries the same with the 'dll' key.
Edit: If you don't know the key to access the dict containing the url, try this.
for library in my_json['libraries']:
    for key in library:
        if 'url' in library[key]:
            file.download(library[key]['url'])
This iterates over all the keys in your library. Then, for whichever key contains a 'url', it downloads using that.
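If the nesting depth itself can vary, a modular option is a recursive walk that collects every value stored under a 'url' key. A sketch (the find_urls helper is illustrative, not from any answer above):

def find_urls(node):
    # recursively yield every value stored under a 'url' key
    if isinstance(node, dict):
        for key, value in node.items():
            if key == 'url':
                yield value
            else:
                yield from find_urls(value)
    elif isinstance(node, list):
        for item in node:
            yield from find_urls(item)

for url in find_urls(my_json['libraries']):
    file.download(url)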
I need to back up various file types to GDrive (not just those convertible to GDocs formats) from a Linux server.
What would be the simplest, most elegant way to do that with a Python script? Would any of the solutions pertaining to GDocs be applicable?
You can use the Documents List API to write a script that writes to Drive:
https://developers.google.com/google-apps/documents-list/
Both the Documents List API and the Drive API interact with the same resources (i.e. same documents and files).
This sample in the Python client library shows how to upload an unconverted file to Drive:
http://code.google.com/p/gdata-python-client/source/browse/samples/docs/docs_v3_example.py#180
The current documentation for saving a file to google drive using python can be found here:
https://developers.google.com/drive/v3/web/manage-uploads
However, the way that the google drive api handles document storage and retrieval does not follow the same architecture as POSIX file systems. As a result, if you wish to preserve the hierarchical architecture of the nested files on your linux file system, you will need to write a lot of custom code so that the parent directories are preserved on google drive.
On top of that, Google makes it difficult to gain write access to a normal Drive account. Your permission scope must include https://www.googleapis.com/auth/drive, and to obtain a token to access a user's normal account, that user must first join a group to provide access to non-reviewed apps. Any OAuth token that is created has a limited shelf life.
However, if you obtain an access token, the following script should allow you to save any file on your local machine to the same (relative) path on google drive.
def migrate(file_path, access_token, drive_space='drive'):
    '''
    a method to save a posix file architecture to google drive

    NOTE: to write to a google drive account using a non-approved app,
    the oauth2 grantee account must also join this google group
    https://groups.google.com/forum/#!forum/risky-access-by-unreviewed-apps

    :param file_path: string with path to local file
    :param access_token: string with oauth2 access token grant to write to google drive
    :param drive_space: string with name of space to write to (drive, appDataFolder, photos)
    :return: string with id of file on google drive
    '''

    # construct drive client
    import httplib2
    from googleapiclient import discovery
    from oauth2client.client import AccessTokenCredentials
    google_credentials = AccessTokenCredentials(access_token, 'my-user-agent/1.0')
    google_http = httplib2.Http()
    google_http = google_credentials.authorize(google_http)
    google_drive = discovery.build('drive', 'v3', http=google_http)
    drive_client = google_drive.files()

    # prepare file body
    from googleapiclient.http import MediaFileUpload
    media_body = MediaFileUpload(filename=file_path, resumable=True)

    # determine file modified time
    import os
    from datetime import datetime
    modified_epoch = os.path.getmtime(file_path)
    modified_time = datetime.utcfromtimestamp(modified_epoch).isoformat()

    # determine path segments
    path_segments = file_path.split(os.sep)

    # construct upload kwargs
    create_kwargs = {
        'body': {
            'name': path_segments.pop(),
            'modifiedTime': modified_time
        },
        'media_body': media_body,
        'fields': 'id'
    }

    # walk through parent directories
    parent_id = ''
    if path_segments:

        # construct query and creation arguments
        walk_folders = True
        folder_kwargs = {
            'body': {
                'name': '',
                'mimeType': 'application/vnd.google-apps.folder'
            },
            'fields': 'id'
        }
        query_kwargs = {
            'spaces': drive_space,
            'fields': 'files(id, parents)'
        }
        while path_segments:
            folder_name = path_segments.pop(0)
            folder_kwargs['body']['name'] = folder_name

            # search for folder id in existing hierarchy
            if walk_folders:
                walk_query = "name = '%s'" % folder_name
                if parent_id:
                    # note: the leading space keeps the query string valid
                    walk_query += " and '%s' in parents" % parent_id
                query_kwargs['q'] = walk_query
                response = drive_client.list(**query_kwargs).execute()
                file_list = response.get('files', [])
            else:
                file_list = []
            if file_list:
                parent_id = file_list[0].get('id')
            # or create folder
            # https://developers.google.com/drive/v3/web/folder
            else:
                if not parent_id:
                    if drive_space == 'appDataFolder':
                        folder_kwargs['body']['parents'] = [drive_space]
                    else:
                        # pop instead of del: 'parents' may not be set yet
                        folder_kwargs['body'].pop('parents', None)
                else:
                    folder_kwargs['body']['parents'] = [parent_id]
                response = drive_client.create(**folder_kwargs).execute()
                parent_id = response.get('id')
                walk_folders = False

    # add parent id to file creation kwargs
    if parent_id:
        create_kwargs['body']['parents'] = [parent_id]
    elif drive_space == 'appDataFolder':
        create_kwargs['body']['parents'] = [drive_space]

    # send create request
    file = drive_client.create(**create_kwargs).execute()
    file_id = file.get('id')

    return file_id
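A hypothetical invocation, assuming ACCESS_TOKEN holds a valid OAuth2 grant with the drive scope:

# hypothetical usage: uploads ./docs/report.pdf and recreates docs/ on Drive
file_id = migrate('docs/report.pdf', ACCESS_TOKEN)
print('created file id: %s' % file_id)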
PS. I have modified this script from the labpack python module. There is a class called driveClient in that module, written by rcj1492, which handles saving, loading, searching and deleting files on google drive in a way that preserves the POSIX file system.
from labpack.storage.google.drive import driveClient
I found that PyDrive handles the Drive API elegantly, and it also has great documentation (especially walking the user through the authentication part).
EDIT: Combine that with the material on Automating pydrive verification process and Pydrive google drive automate authentication, and that makes for some great documentation to get things going. Hope it helps those who are confused about where to start.
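As a taste of how little code PyDrive needs, a minimal upload sketch (assumes a client_secrets.json is configured as described in PyDrive's quickstart; the filename is illustrative):

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LocalWebserverAuth()  # opens a browser for the OAuth consent flow
drive = GoogleDrive(gauth)

# upload an arbitrary (non-GDocs) file from the local machine
f = drive.CreateFile({'title': 'backup.tar.gz'})
f.SetContentFile('backup.tar.gz')
f.Upload()
print('uploaded: %s (id %s)' % (f['title'], f['id']))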
I'm trying to use the pyfacebook functions (https://github.com/sciyoshi/pyfacebook/) in a Google App Engine project. I've followed the advice on the Facebook developer forum (http://forum.developers.facebook.net/viewtopic.php?pid=164613): I added the additional functions to the __init__.py file, copied that file to the root directory of my project, and renamed it facebook.py. Having imported facebook.py, I added the following to the get(self) method of the Python class for the page:
facebookapi = facebook.Facebook(API_KEY, SECRET)

if not facebookapi.check_connect_session(self.request):
    path = os.path.join(os.path.dirname(__file__), 'templates/login.html')
    self.response.out.write(template.render(path, {'apikey': API_KEY}))
    return

user = facebookapi.users.getInfo(
    [facebookapi.uid],
    ['uid', 'name', 'birthday', 'relationship_status'])[0]

template_values = {
    'name': user['name'],
    'birthday': user['birthday'],
    'relationship_status': user['relationship_status'],
    'uid': user['uid'],
    'apikey': API_KEY
}

path = os.path.join(os.path.dirname(__file__), 'templates/index.html')
self.response.out.write(template.render(path, template_values))
When running it I get the following error:
File "\much\baw08u\Private\IDS\helloworld\helloworld.py", line 54, in get
if not facebookapi.check_connect_session(self.request):
AttributeError: 'Facebook' object has no attribute 'check_connect_session'
So it seems to be loading the facebook API fine, but not the new methods I've added. I copied and pasted the code from the developer forum at the bottom of the Facebook class definition, and made sure all the indentation was right but it still doesn't seem to be picking them up. Does anyone know what might be the problem?
Thanks
Ben
You believe the Facebook class has a certain method but Python is sure it hasn't. Why? Maybe you misspelled the method name, maybe you did not get the indentation right - hard to say without seeing the code.
You could try poking around to validate your assumptions:
import facebook
import logging
logging.warn('Facebook class: %r', dir(facebook.Facebook))
logging.warn('facebook module: %r', dir(facebook))
If you are sure you are operating on the correct file, then you should expect to see check_connect_session as a method of Facebook. If you didn't add enough indentation, then you'd expect to see check_connect_session as a function defined in the facebook module. Too much indentation would make check_connect_session a nested function of whichever method precedes it, and it won't show up in the above logging. Pay close attention to indentation.
However, a better way to add some custom methods might be:
import facebook

class Facebook(facebook.Facebook):
    def check_connect_session(self, request):
        pass

facebookapi = Facebook(API_KEY, SECRET)

if not facebookapi.check_connect_session(...):
    ...
Now when Facebook updates their code, you simply copy the new file into place; no need to merge your customisations.