So I'm able to upload a file to SharePoint using the office365 package; however, in SharePoint the file is not checked in and I had to do that manually. Is there any function in that package to do a check-in?
Here's my code:
def csv_to_sp(spfile_path, output):
    ctx = Extract.connect()
    target_folder = ctx.web.get_folder_by_server_relative_url(spfile_path)
    with open(output, 'rb') as content_file:
        file_content = content_file.read()
    target_folder.upload_file(output, file_content).execute_query()
    response = 'Uploaded to sharepoint!'
    return response
It sounds like the target library has one of the following going on which would cause the uploaded document to be checked out after upload:
1. Require Check Out is enabled. When this setting is enabled, new documents you upload are checked out to you by default. You can toggle this option from the Library Settings / Versioning Settings page as described here.
2. There are required fields without default values. When additional metadata fields are added to the library and set to be required, there is an option to specify a default value. If no default value is specified, any new item uploaded to that library will be checked out and visible only to the user who uploaded it. Such items will not become visible to anyone else (including admins) until that user does a manual check-in, at which point they are forced to provide a value for the required field.
Since you did not reference having to specify values for fields, I suspect you probably have #1 going on in your library.
Try turning off the "Require Check Out" setting and retest.
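Alternatively, if Require Check Out has to stay enabled, recent versions of the Office365-REST-Python-Client library expose a check-in call on the File object that upload_file returns. A rough sketch under that assumption (the exact method signature may vary between library versions):

def csv_to_sp(spfile_path, output):
    ctx = Extract.connect()
    target_folder = ctx.web.get_folder_by_server_relative_url(spfile_path)
    with open(output, 'rb') as content_file:
        file_content = content_file.read()
    # upload_file returns a File object we can keep a reference to
    uploaded_file = target_folder.upload_file(output, file_content).execute_query()
    # Check the file in so other users can see it; 1 = major check-in
    # (assumes File.checkin(comment, checkin_type) as in recent library releases)
    uploaded_file.checkin('Uploaded via script', 1)
    ctx.execute_query()
    return 'Uploaded and checked in!'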
I'm using Python 2.7 and Zeep to call SuiteTalk v2017_2_0, the SOAP-based NetSuite web service API. The operation I'm calling is search, like so:
from zeep import Client

netsuite = Client(WSDL)
TransactionSearchAdvanced = netsuite.get_type(
    'ns19:TransactionSearchAdvanced')
TransactionSearchRow = netsuite.get_type('ns19:TransactionSearchRow')

# login removed for brevity

r = netsuite.service.search(TransactionSearchAdvanced(
    savedSearchId=search, columns=TransactionSearchRow()))
Now the results of this include all the data I want, but I can't figure out how (if at all) I can determine the display columns that the website would show for this saved search, and the order they go in.
I figure I could probably call netsuite.service.get() and pass the internalId of the saved search, but what type do I specify? Along those lines, has anyone found a decent reference for all the objects, type enumerations, etc.?
https://stackoverflow.com/a/50257412/1807800
Check out the link above regarding Search Preferences. It explains how to limit the columns returned to only those defined in the saved search.
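A rough sketch of that with Zeep is below; the namespace prefix and the SOAP header name are assumptions, so confirm them against your WSDL (for example with python -m zeep <WSDL>):

# Assumed namespace prefix for the platform messages schema
SearchPreferences = netsuite.get_type('ns4:SearchPreferences')

# returnSearchColumns returns only the columns defined on the saved search
# rather than full records
prefs = SearchPreferences(returnSearchColumns=True, pageSize=1000)

r = netsuite.service.search(
    TransactionSearchAdvanced(savedSearchId=search, columns=TransactionSearchRow()),
    _soapheaders={'searchPreferences': prefs})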
I'm tinkering around with cloud storage APIs and need help getting the full file name from an upload form in Django. Currently I can let the user choose a file, "file.txt", from any directory, and I can get the name through:
for file in request.FILES.getlist('file'):
    print file.name
However, I want the full file path, something more like 'home/user/documents/file.txt'.
Is this possible, and how would I get the full path? I'm not looking to actually upload the file, just to get the full path so that I can use the Dropbox/Google Drive API.
For reference here is my form:
class UploadFileForm(forms.Form):
    folder_name = forms.CharField(max_length=300)
    title = forms.CharField(max_length=50)
    file = forms.FileField(label='Select a file')
Thanks in advance.
Browsers do not tell you what directory the files come from.
They do not even give that information to the scripts on the page.
When a user uploads a file, you may know:
Its basename.
Its size.
Its type (usually guessed from extension).
Its modification time.
This is all you will get, regardless of whether you access the information straight from the browser's <input> element, or wait for it to POST it.
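On the Django side this corresponds to the attributes of the UploadedFile objects in request.FILES; a minimal sketch (the view and field names are just illustrative):

def upload_view(request):
    for f in request.FILES.getlist('file'):
        print f.name          # basename only, e.g. 'file.txt' - never the client-side path
        print f.size          # size in bytes
        print f.content_type  # MIME type as reported by the browser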
You may also turn this answer the other way around: if you really need to be able to post the full path of the file, you need to develop a client-side application that will send it. It could be a standalone executable, a browser addon/app, or a Java applet; anything, as long as it runs outside of the webpage sandbox.
I do not use Dropbox, but I believe you need to download and install some additional software to use it. That's how it would access the full path of your files.
I need a component that's a browser-based file browser, and I expect some existing Django app provides this. Is there such a thing?
The full story:
I'm building a Django app that is used for testing. I want to use it to serve files (and strings, etc.) and attach custom headers to them.
Currently, I have a model FileSource which has a single file_path field, which is of type django.db.models.FileField.
When creating a FileSource from the admin, the user gets a nice file upload dialog, and on save the chosen file is copied onto the server (into an odd location inside the directory where Django is installed, or something like that, because I didn't customize the storage, and that doesn't help me in any way).
My problem: I only want to use the file dialog for the user to select a full path on the server. The file that the user chooses must only be referenced, not copied (as it is now), and it must already reside on the server.
The server must therefore be able to list the files it has, so I basically need a little browser-based file browser.
At that point, I expect to be able to save a full path in my DB, and then I'll be able to access that file and serve it (together with whatever custom headers the user chooses from my app).
Currently, as you might know, browsers always lie about the full path of the file. Chromium prepends "C:\fakepath\" to the file name, so I need support from the backend to accomplish this.
Also, I checked out django-filebrowser and django-filer, and from what I understood, they weren't built for this. If I'm wrong, a little assistance in configuring them would be awesome.
You can use a FilePathField for that. It won't upload a file, but rather allow you to choose a pre-existing file on the server. A caveat is that you can only use one base directory. If you need multiple directories, then you'd need to go with something like django-filer.
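A minimal sketch of the model, assuming the files to reference live under a directory such as /srv/testdata (the path is a placeholder):

from django.db import models

class FileSource(models.Model):
    # References an existing file on the server instead of uploading a copy.
    # recursive=True also lists files in subdirectories of the base path.
    file_path = models.FilePathField(path='/srv/testdata', recursive=True, max_length=300)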
I want users to be able to create the report template in Microsoft Word; I'll then probably add document fields. The script then evaluates a number of things, fills in the appropriate text in the fields, and creates a PDF of the filled-in form.
So which modules would be best for this? I've looked at ReportLab, but I need to work from a pre-generated template, and that doesn't seem feasible.
If you will use it only under Windows and have Word installed, you could use PyWin32, which lets you access the API of the suite. You could also try IronPython as suggested here.
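A rough sketch of that approach with pywin32 (Windows with Word installed; the template path, form field name, and output path below are placeholders):

import win32com.client

# Drive Word over COM to fill a template and export it as PDF
word = win32com.client.Dispatch('Word.Application')
doc = word.Documents.Open(r'C:\templates\report_template.docx')

# Fill a legacy text form field by its bookmark name
doc.FormFields('CustomerName').Result = 'Acme Corp'

# 17 is Word's wdFormatPDF constant
doc.SaveAs(r'C:\reports\report.pdf', 17)
doc.Close(False)  # close without saving changes back to the template
word.Quit()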
If you need to read a .docx template regardless of the platform, you could try this outdated extension.
If it suits your application to use a cloud service to populate Doc/DocX files, there is a commercial system called Docmosis that can populate plain-text (or merge) fields and stream back populated PDF documents to your Python system, or deliver them via email etc.
You would upload your "template" Doc files to Docmosis via the website (or API calls), then invoke Docmosis using an HTTPS POST from your Python code.
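A very rough sketch of that call using the requests library; the endpoint URL and field names below are placeholders, so check the Docmosis REST documentation for the real ones:

import json
import requests

# Placeholder endpoint and field names - consult the Docmosis REST docs
resp = requests.post('https://your-docmosis-endpoint/render', data={
    'accessKey': 'YOUR_ACCESS_KEY',
    'templateName': 'report_template.docx',
    'outputName': 'report.pdf',
    'data': json.dumps({'CustomerName': 'Acme Corp'}),
})

with open('report.pdf', 'wb') as f:
    f.write(resp.content)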
Please note I work for the company that created Docmosis.
Hope that helps.
I have a form where I upload a file, and I generate a report out of it. The thing is, I would also like to make the report available for download as an archive. I would like to somehow include the CSS and the JS (that I inherit from my layout) inside the report, but I don't really know how to go about this.
So far I am not storing the file the report is generated from on the server side; I delete it after I'm done with it.
The only solution I could think of so far was: from my archive-generating view, use urllib to post to the form that generates the report, save the response, and just rewrite the links to the stylesheet/JS files.
Is there a simpler way to go about this? Is there a way to keep some files on the server side as long as the client's session lives?
Use the HttpResponse object in the view that shows the generated report instead of posting with urllib. If you have something along the lines of:
def report_view(request):
    ...
    return render(request, ...)
Then use the response object to create an archive
def report_view(request):
    ...
    archive_link = "/some/nice/url/to/the/archive"
    response = render(request, ..., {"archive_link": archive_link})
    store_archive(response)
    return response
def store_archive(response):
    # Here you will need to find the CSS/JS files, bundle them together with the
    # rendered response in whatever type of archive you like, then temporarily
    # store that archive so it can be downloaded from the archive_link you
    # previously set in your view.
    ...

def report_archive_view(request):
    # Serve the temporarily stored archive, then delete it if you like.
    ...
You can find all you need to know about HttpResponse in the Django docs.
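If it helps, here is a rough sketch of what those two functions could look like, assuming the CSS/JS live under STATIC_ROOT and the archive is written to a temporary path (all file names and paths below are placeholders):

import os
import zipfile

from django.conf import settings
from django.http import HttpResponse

ARCHIVE_PATH = '/tmp/report_archive.zip'  # placeholder location

def store_archive(response):
    # Bundle the rendered report plus its static assets into a zip on disk
    with zipfile.ZipFile(ARCHIVE_PATH, 'w', zipfile.ZIP_DEFLATED) as archive:
        archive.writestr('report.html', response.content)
        for name in ('css/layout.css', 'js/layout.js'):  # placeholder asset names
            archive.write(os.path.join(settings.STATIC_ROOT, name), name)

def report_archive_view(request):
    # Stream the stored archive back to the client and remove it afterwards
    with open(ARCHIVE_PATH, 'rb') as f:
        data = f.read()
    os.remove(ARCHIVE_PATH)
    response = HttpResponse(data, content_type='application/zip')
    response['Content-Disposition'] = 'attachment; filename="report.zip"'
    return response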
Although this might work for you, I doubt it is what you are really after; maybe what you are really looking for is to generate a PDF report using ReportLab?
You could always keep the files and have a cron job that deletes files whose session has expired.