My code:
csv = pd.read_html(table)[0].to_csv('datasource_files/testtable7.csv',index=False,header=False)
conn = tinys3.Connection(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY,endpoint='s3-us-west-2.amazonaws.com/')
csv_file = open('datasource_files/testtable7.csv', 'rb')
csv_name= 'datasource_files/testtable7.csv'
conn.upload(csv_name,csv_file,'datafix1')
ds = DataSource.objects.create(file=csv_name,datatype="CSV",creator=mike, title="title",description="desc")
DataSource is a Django model and file is a models.FileField(). Currently ds.file is http://datafix1.s3-us-west-2.amazonaws.com/datasource_files/testtable7.csv, but that file doesn't exist. To access the uploaded file I have to go to http://datafix1.s3-us-west-2.amazonaws.com//datasource_files/testtable7.csv (note the extra slash, i.e. an empty directory at the start of the path). I believe this is because tinys3 is not recognizing the already existing (important) "datasource_files" folder, but I may be wrong. Can anyone help? Thanks in advance.
I used boto3 instead; it works like a charm.
I have a SharePoint folder into which several files are dumped every week. Using Python, I would like to download all the files from it to a local folder, do some transformations, and upload a single consolidated file back to a different location on SharePoint. But whatever calls I make return an empty JSON response.
Here is the code I have tried till now:
import sharepy
sess = sharepy.connect(site='company.sharepoint.com', username='username', password='password')
site = r'https://company.sharepoint.com/'
path = r'/some-site/Documents/Folder1/Folder2/Folder3/'
r = sess.get(site + """_api/web/GetFolderByServerRelativeUrl('"""+path+"""')/Files""")
r
r.json()['d']['results']
The r object is a <Response [200]>. I want to download all the files in Folder3, but the result is empty. If I go to the SharePoint website, I can see all the files using the same username used to create the sess object.
I am getting the path variable from the bottom of the details pane. I have tried many options other than GetFolderByServerRelativeUrl, but none of them seem to work.
I am new to Python and have no clue about Rest APIs.
Thank you for your help.
You need to get a list of the files first:
# Get list of all files and folders in library
files = s.get("{}/_api/web/lists/GetByTitle('{}')/items?$select=FileLeafRef,FileRef"
.format(site, library)).json()["d"]["results"]
For the complete solution, check this:
https://github.com/JonathanHolvey/sharepy/issues/5
I have to process a file on an SFTP server and, when done, move that file to an archive directory using Paramiko. However, if the file already exists in the archive directory, I want to rename the file at the same time. I have the basics for detecting the existing file in archive and adjusting the name. Basically, the final call looks like:
client.rename('/main-path/file.txt', '/main-path/archive/file_1.txt')
or
client.posix_rename('/main-path/file.txt', '/main-path/archive/file_1.txt')
These calls work on some servers with no problem. On other servers, I get an "Errno 2" error from Paramiko.
Am I going about this wrong? Maybe I need to rename the file, in place, first?
client.rename('/main-path/file.txt', '/main-path/file_1.txt')
and then
client.rename('/main-path/file_1.txt', '/main-path/archive/file_1.txt')
???
Any help would be appreciated.
You can check whether the file already exists in the remote archive directory with Paramiko itself. Note that os.listdir would only look at the local filesystem; use the SFTP client's listdir for the remote directory:
archive_files = client.listdir('/main-path/archive')
if 'file.txt' in archive_files:
    client.posix_rename('/main-path/file.txt', '/main-path/archive/file_1.txt')
else:
    client.posix_rename('/main-path/file.txt', '/main-path/archive/file.txt')
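If several renamed copies can accumulate over time, a small helper can pick the next free name. This is a sketch: client is the Paramiko SFTP client from the question, and next_free_name is a name I made up for illustration:

```python
import posixpath

def next_free_name(client, archive_dir, filename):
    """Return a path in archive_dir that does not collide with existing files."""
    existing = set(client.listdir(archive_dir))
    if filename not in existing:
        return posixpath.join(archive_dir, filename)
    # Insert _1, _2, ... before the extension until a free name is found
    stem, dot, ext = filename.partition('.')
    n = 1
    while '{}_{}{}{}'.format(stem, n, dot, ext) in existing:
        n += 1
    return posixpath.join(archive_dir, '{}_{}{}{}'.format(stem, n, dot, ext))
```

It would be used as client.posix_rename('/main-path/file.txt', next_free_name(client, '/main-path/archive', 'file.txt')).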
Let's say I have a folder containing the main-file main.py and a configuration-file config.json.
In the JSON file there are multiple entries, each containing a filename and the name of a method within that file.
Depending on user input from main.py I get the corresponding filename and the method from config.json and now I want to execute that method.
I tried the following
file = reports[id]["filename"]
method = reports[id]["method"]
# which would give me: file = 'test.py' and method = 'execute'
import file
file.method()
Obviously, this didn't work. The problem is that I can't know ahead of time which files will exist.
Other developers would add scripts to a specific folder and add their entries to the configuration file (kind of like a plugin system). Does anybody have a solution?
Thanks in advance
Thanks for your input... importlib was the way to go. I used this answer here:
How to import a module given the full path?
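For later readers, a minimal sketch of that importlib approach (run_plugin and the "plugin" module name are illustrative, not from the question):

```python
import importlib.util

def run_plugin(path, method_name):
    # Load a module from an arbitrary file path at runtime
    spec = importlib.util.spec_from_file_location("plugin", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    # Look the configured method up by name and call it
    return getattr(module, method_name)()
```

With the question's config, this would be called as run_plugin(reports[id]["filename"], reports[id]["method"]).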
I'm trying to find a way to use ImageMagick's identify command without saving the image file in app-root/data. Basically, I want to validate the image file before actually saving it to the destination. If it were saved into app-root/data, I could easily just do this:
(temp, tempError) = (subprocess.Popen(['identify', '../data/' + filename + extension], stdout=subprocess.PIPE)).communicate()
But this would require the image to be uploaded first before identifying. Any ways to do this?
You should save the file into the /tmp directory first, then validate it, and if it passes, then move it to your ~/app-root/data directory.
You can check out this Filesystem Overview on the Developer Portal for more information: https://developers.openshift.com/en/managing-filesystem.html
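A sketch of that flow, assuming identify is on the PATH; the function name and arguments here are placeholders, not the asker's actual view code:

```python
import os
import shutil
import subprocess
import tempfile

def validate_and_store(uploaded_bytes, dest_path):
    """Write the upload to /tmp, run identify, move it to dest only if valid."""
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(uploaded_bytes)
        tmp_path = tmp.name
    # identify exits non-zero when the file is not a readable image
    result = subprocess.run(['identify', tmp_path],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if result.returncode == 0:
        shutil.move(tmp_path, dest_path)  # passed validation: keep it
        return True
    os.unlink(tmp_path)                   # failed: discard the temp file
    return False
```

This way nothing invalid ever lands in app-root/data, and /tmp is cleaned up either way.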
Does anyone know how to compare the number of files and the size of the files in archiwum.rar against its extracted content in the folder?
The reason I want to do this is that the server I'm working on was restarted a couple of times during extraction, and I am not sure whether all the files were extracted correctly.
The .rar files are more than 100 GB each and the server is not that fast.
Any ideas?
PS: If the solution could be some code instead of a standalone program, my preference is Python.
Thanks
In Python you can use the rarfile module. Its usage is similar to the built-in zipfile module.
import rarfile
import os.path

extracted_dir_name = "samples/sample"  # directory with extracted files
archive = rarfile.RarFile("samples/sample.rar", "r")

# Compare each archive entry with its extracted counterpart
for info in archive.infolist():
    print(info.filename, info.date_time, info.file_size)
    extracted_file = os.path.join(extracted_dir_name, info.filename)
    if info.file_size != os.path.getsize(extracted_file):
        print("Different size!")