Does anyone know how to upload a .csv file to a folder inside a blob container in Python?
I'm having difficulty trying to access the folders inside it.
I have the CSV and want to save it inside a folder in the blob container, but it didn't work.
The file only exists in code, so I don't want to pass the directory where it is.
csv = df.to_csv()
block_blob_service.create_blob_from_path(container_name, 'folder/csv/mycsv/' , csv)
Does anyone know how I can save the CSV directly to the folder inside the storage container (folder/csv/mycsv/) in Azure?
I got an error: stat: path too long for Windows.
Reading the documentation of DataFrame.to_csv, I believe the csv variable actually contains string data. If that's the case, then you will need to use the create_blob_from_text method.
So your code would be:
csv = df.to_csv()
block_blob_service.create_blob_from_text(container_name, 'folder/csv/mycsv/' , csv)
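For completeness, here is a minimal end-to-end sketch with the legacy azure-storage-blob SDK's BlockBlobService; the account credentials, container name, and the trailing file name mycsv.csv are placeholder assumptions, and the "folder" part is just a prefix in the blob name:

import pandas as pd
from azure.storage.blob import BlockBlobService  # legacy (2.x) SDK

# Placeholder credentials and container name.
block_blob_service = BlockBlobService(account_name='myaccount', account_key='mykey')
container_name = 'mycontainer'

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
csv_text = df.to_csv()  # with no path argument, to_csv() returns the CSV as a string

# Blob storage has no real folders; the prefix acts as one, so end the blob
# name with an actual file name (mycsv.csv here is a placeholder).
block_blob_service.create_blob_from_text(container_name, 'folder/csv/mycsv/mycsv.csv', csv_text)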
I have a script that takes an input text file, finds data in it, and stores that data in variables; later I call those variables to write to a new file. This snippet of code is just for reading the txt file and storing the data from it as variables.
import re

searchfile = open('C://Users//Me//DynamicFolder//report//summary.txt', 'r', encoding='utf-8')

slab_count = 0
slab_number = []
slab_total = 0

for line in searchfile:
    if "Slab" in line:
        # Pull every number out of the line; the last one is the percentage.
        slab_num = [float(s) for s in re.findall(r'[-+]?(?:\d*\.\d+|\d+)', line)]
        slab_percent = slab_num[-1]
        slab_number.append(slab_percent)
        slab_count = slab_count + 1

slab_total = 0
for slab_percent in slab_number:
    slab_total += slab_percent

searchfile.close()
I am using xlsxwriter to write the variables to an Excel doc.
My question is, how do I iterate this to search through a given directory's sub-directories for summary.txt when there is a dynamic folder?
So C://Users//Me//DynamicFolder//report//summary.txt is the path to one of the files. There are several folders I named DynamicFolder; they are there because another process puts them there, and their names change all the time. I need this script to go into each of those dynamic folders, into a subdir called report; this name is static and is always the same. So each of those DynamicFolders has a subdir called report, and in the report folder is a file called summary.txt. I am trying to go through each of those DynamicFolders into the subdir report > summary.txt and then open and write data from those txt files.
How do I iterate or loop this? Right now I have 18 folders with those DynamicFolder names, which will change when they are overwritten. How can I make this snippet of code iterate through them?
for path in Path('C://Users//Me//DynamicFolder//report//summary.txt').rglob('summary.txt'):
The report folder is not the only folder with a summary.txt file, but it's the only folder with the file I want. So the code above pulls ALL summary.txt files from all subdirectories under the DynamicFolder (not just the report folder). I am wondering if I can make this look at JUST the report subdir under each DynamicFolder, and somehow use this to iterate the rest of my code?
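One way to limit the walk to just the report sub-folders is to glob a single level of dynamic folders and then the fixed report/summary.txt path. A minimal sketch, assuming the DynamicFolders all sit directly under C://Users//Me and that the parsing shown earlier is reused per file:

from pathlib import Path
import re

base = Path('C://Users//Me')

# '*/report/summary.txt' matches <any DynamicFolder>/report/summary.txt only,
# so summary.txt files in other sub-directories are skipped.
for summary_path in base.glob('*/report/summary.txt'):
    with open(summary_path, 'r', encoding='utf-8') as searchfile:
        slab_number = []
        for line in searchfile:
            if "Slab" in line:
                slab_num = [float(s) for s in re.findall(r'[-+]?(?:\d*\.\d+|\d+)', line)]
                slab_number.append(slab_num[-1])
        slab_total = sum(slab_number)
        # ...write slab_total etc. for this folder with xlsxwriter...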
I am trying to write a simple function that exports the resulting dataframe to my local drive.
def finding(dataset, x='target'):
    test = dataset.loc[dataset['Title'] == x]
    return
    test.to_excel(r'C:\testing.xlsx')
This is how I executed the function.
finding(df, x='Manager')
However, I am not able to see the Excel file on my local drive.
Am I missing something?
Thanks.
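One likely cause: the return statement comes before test.to_excel(...), so the export line never runs. A minimal sketch of a corrected version (the output path is the placeholder from the question):

def finding(dataset, x='target'):
    # Filter first, write the result to disk, then return it.
    test = dataset.loc[dataset['Title'] == x]
    test.to_excel(r'C:\testing.xlsx')  # placeholder path from the question
    return test

result = finding(df, x='Manager')  # df as defined elsewhere in the question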
I am writing a script to pull XML files from an FTP, turn them into an .xlsx file, and re-upload to a different directory on the same FTP. I want to create the .xlsx file within my script instead of copying the XML data into a template and uploading my local file.
I tried creating a filename for the .xlsx doc, but I realize that I need to save it somewhere before I can upload it to the FTP. My question is, would it be better to create a temporary folder on the server the script is being run on and empty the folder out afterwards? Or is there a way to upload the doc without saving it anywhere (preferred)? I will be running the script on a Windows server.
ftps.cwd(ftpExcelDir)
wbFilename = str(orderID + '.xlsx')
savedFile = ...  # saving the file somewhere; this is the part I'm having trouble with
ftps.storline('STOR ' + wbFilename, savedFile)
With the following code, I can get the .xlsx files to save to the FTP, but I receive an invalid extension/corrupt file error from Excel:
ftps.cwd(ftpExcelDir)
wbFilename = str(orderID + '.xlsx')
inMemoryWB = io.BytesIO()
wb.save(inMemoryWB)
ftps.storbinary('STOR ' + wbFilename, inMemoryWB)
The FTP functions take file objects... but those don't, strictly speaking, need to be real files. Python has BytesIO and StringIO objects which act like files but are backed by memory. See: https://stackoverflow.com/a/44672691/8833934
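Building on that, the corrupt-file error in the second snippet is consistent with the buffer being uploaded from its current position (the end) instead of the start. A minimal sketch, assuming wb is an openpyxl-style workbook and ftps, ftpExcelDir, and orderID are as in the question; the only change is the seek(0) before uploading:

import io

ftps.cwd(ftpExcelDir)
wbFilename = str(orderID + '.xlsx')

inMemoryWB = io.BytesIO()
wb.save(inMemoryWB)   # write the workbook into the in-memory buffer
inMemoryWB.seek(0)    # rewind so storbinary reads from the beginning

ftps.storbinary('STOR ' + wbFilename, inMemoryWB)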
I am trying to open a CSV file in write mode using csv.writer. It works fine locally, but when I try to do the same in AWS Lambda it says read-only file system. I am sure that I am opening it in write-binary mode.
Below is the code for reference.
f = csv.writer(open('abc.csv','wb+'))
f.writerow(['botName','botVersion','utteranceString','count','distinctUsers','firstUtteredDate','lastUtteredDate','status'])
Below is the exception I am getting:
[Errno 30] Read-only file system: 'abc.csv'
Edit 1:
The above error is fixed by adding /tmp/ to the file path, but I am not able to move the CSV file created in /tmp to the S3 bucket.
I used the code below:
s3_u.meta.client.upload_file('/tmp/' + output_filename, 'codepipelinedev', k)
This generates an empty file in the S3 bucket, and it throws an error if I test with a non-existent file.
When I tried the same thing locally, the CSV files were created with the expected data, but after transferring those files I get empty files in our S3 bucket.
I'd appreciate any help with this.
Thanks in advance.
AWS Lambda functions only have write access to the /tmp folder within the Lambda runtime environment. If you need to modify that file you need to first copy it to /tmp and then modify it there.
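As a hedged sketch of how the pieces fit together (output_filename and the bucket name come from the question; the S3 key k is a placeholder): writing via a with block guarantees the CSV is flushed and closed before the upload, which is a common reason for empty objects ending up in S3. Note that on Python 3, csv.writer also expects a text-mode file opened with newline=''.

import csv
import boto3

output_filename = 'abc.csv'              # as in the question
local_path = '/tmp/' + output_filename   # Lambda can only write under /tmp
k = 'reports/' + output_filename         # placeholder S3 key

# The with block closes (and flushes) the file before we upload it.
with open(local_path, 'w', newline='') as fp:
    writer = csv.writer(fp)
    writer.writerow(['botName', 'botVersion', 'utteranceString', 'count',
                     'distinctUsers', 'firstUtteredDate', 'lastUtteredDate', 'status'])

s3 = boto3.resource('s3')
s3.meta.client.upload_file(local_path, 'codepipelinedev', k)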
I want to upload a zip file from a file input in a form, then extract the contents of this uploaded zip and store the contents (files) in the blobstore, so they can be downloaded later after putting these files in one folder. The problem is that I can't deal with the zip file directly (to read it). I tried this:
form = cgi.FieldStorage()
file_upload = form['file']
zip1=file_upload.filename
zipstream=StringIO.StringIO(zip1.read())
But the problem remains that I can't read the zip as above. I also tried to read the zip file directly, like this:
z1=zipfile.ZipFile(zip1,"r")
But there was an error this way as well. Please, can anyone help me? Thanks in advance.
Based on your comment, it sounds like you need to take a closer look at the cgi module documentation, which includes the following:
If a field represents an uploaded file, accessing the value via the value attribute or the getvalue() method reads the entire file in memory as a string. This may not be what you want. You can test for an uploaded file by testing either the filename attribute or the file attribute. You can then read the data at leisure from the file attribute...
This suggests that you need to modify your code to look something like:
form = cgi.FieldStorage()
file_upload = form['file']
z1 = zipfile.ZipFile(file_upload.file, 'r')
There are additional examples in the documentation.
You don't have to extract files from the zip in order to make them available for download - see this post for an example of serving direct from a zip. You can adapt that code if you want to extract the files and store them individually in the blobstore.
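For the extraction route, here is a minimal sketch of walking the uploaded archive with zipfile; file_upload is the cgi field from above, and store_in_blobstore is a placeholder for whatever blobstore-writing helper your app uses:

import zipfile

z1 = zipfile.ZipFile(file_upload.file, 'r')
for name in z1.namelist():
    if name.endswith('/'):
        continue                    # skip directory entries
    data = z1.read(name)            # raw bytes of this archive member
    store_in_blobstore(name, data)  # placeholder: write to the blobstore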