Not able to download entire directory from azure file share in python
I have tried all the basic approaches available on Google:
from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string(connection_string, "filshare")
my_file = share.get_file_client("dir1/sub_idr1")
# print(dir(my_file))
stream_1 = my_file.download_file()
I tried this in my environment and got the results below.
Initially, I tried with Python:
Unfortunately, the ShareServiceClient class, which is the client for interacting with the File Share service at the account level, does not support a download operation in the Azure Python SDK.
The ShareClient class, which interacts with a specific file share in the account, does not support downloading a directory or the whole file share either.
But there is one class, ShareFileClient, which supports downloading individual files inside a directory (though not an entire directory); you can use this class to download the files of a directory with the Python SDK, as sketched below.
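For example, here is a minimal sketch of that per-file approach (assuming the azure-storage-file-share package; the share name "fileshare", the directory "dir1" and the local target folder are placeholder values): it walks the directory with ShareDirectoryClient and downloads each file with ShareFileClient, recursing into sub-directories.

import os
from azure.storage.fileshare import ShareDirectoryClient, ShareFileClient

def download_directory(connection_string, share_name, directory_path, local_path):
    # List the directory contents and download each file; recurse into sub-directories.
    dir_client = ShareDirectoryClient.from_connection_string(
        conn_str=connection_string, share_name=share_name, directory_path=directory_path)
    os.makedirs(local_path, exist_ok=True)
    for item in dir_client.list_directories_and_files():
        name = item["name"]
        remote_path = f"{directory_path}/{name}" if directory_path else name
        if item["is_directory"]:
            download_directory(connection_string, share_name, remote_path,
                               os.path.join(local_path, name))
        else:
            file_client = ShareFileClient.from_connection_string(
                conn_str=connection_string, share_name=share_name, file_path=remote_path)
            with open(os.path.join(local_path, name), "wb") as fh:
                file_client.download_file().readinto(fh)

# connection_string as defined earlier in your code
download_directory(connection_string, "fileshare", "dir1", "./dir1_local")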
You can also check in the Azure portal: select your Storage account > File share > Directory. There is no option in the portal to download a directory either, only an option to download a specific file.
As a workaround, if you need to download a directory from a file share, you can use the AzCopy tool to download the directory to your local machine.
I tried to download the directory with the AzCopy command and was able to download it successfully!
Command:
azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3xxxxxxxx' 'C:\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
We have a Python application on a Windows Azure VM that reads data from an API and loads the data into an on-prem DB. The application works just fine, and the code is source controlled in an Azure DevOps repo. The current deployment process is for someone to pull the main branch and copy the application from their local machine to c:\someapplication\ on the STAGE/PROD server. We would like to automate this process. There are a bunch of tutorials on how to do a build for API and web applications, which require your Azure subscription and app name (which we don't have). My two questions are:
Is there a way to do a simple copy to the c:\someapplication folder from Azure DevOps for this application?
If there is, would the copy solution be the best solution for this, or should I consider something else?
Should I simply clone the main repo to each folder location above and then automate the git pull via the Azure pipeline? Any advice or links would be greatly appreciated.
According to your description, you could try using the CopyFiles@2 task: set the local folder up as a shared folder and use it as the TargetFolder (the target folder or UNC path that will contain the copied files).
YAML like:
- task: CopyFiles@2
  inputs:
    SourceFolder: $(Build.SourcesDirectory) # string. Source Folder.
    Contents: '**' # string. Required. Contents. Default: **.
    TargetFolder: 'c:\someapplication' # string. Required. Target Folder.
Please check if it meets your requirements.
I saved files to a directory using (./folder_name) while working in an AzureML Jupyter notebook. Now how can I download them to my local machine or to blob storage?
The folder has a lot of files and sub-directories in it, which I scraped online, so it is not realistic to save them one by one.
import os

# target_key_word is a list of keywords defined elsewhere in my notebook
file_path = "./"
for i in target_key_word:
    tem_str = i.replace(' ', '+')
    dir_name = file_path + i
    if not os.path.exists(dir_name):
        os.mkdir(dir_name)
    else:
        print("Directory ", dir_name, " already exists")
I appreciate this is an old question, but for anyone else with this question, this is how I did it:
Open a new terminal window in your Azure ML Notebooks homepage.
Zip up the entire directory with:
zip -r myfiles.zip .
Refresh the folder explorer and download the zip file.
If you are asking how you could download all your files and folders from the Notebooks GUI of Azure ML Studio, I wish there was a straightforward way to do so. As far as I know, you could do the following for now:
You have to download Azure Storage Explorer and navigate to the blob storage linked to your ML workspace in the Azure portal (you can find the precise blob storage account by navigating to the overview section of your ML workspace).
After you open the linked blob storage from the resource group, navigate to the overview section where you should select Open in Explorer.
Once the Storage Explorer is open you can link the Azure blob storage by connecting it to the Explorer (which is pretty straightforward, you just have to fetch the key from the Access Keys section).
After your storage account is linked, navigate to the File Shares folder and find the folder prefixed code-. This is where you will find all your files and folders, which you can download to your local machine in bulk.
From my understanding, you would like to download the project files.
You could do one of the below:
From the project page, click the "Download Project" button to download all the files of a given project.
OR
Go to Azure Notebooks and sign in.
From your public profile page, select My Projects at the top of the
page.
Select the project.
Click the Download button to trigger a zip file download that contains all of your project files.
Another option for downloading notebooks is to go through the Azure ML Studio UI via "Edit in VS Code." Once you have it open in VS Code, right-click the folder and there is a "Download" option.
You can save these files locally and then re-upload them as needed.
I installed the CLI REST API and now I want to save a test file to my local desktop. This is the command I have, but it throws a syntax error:
dbfs cp dbfs:/myname/test.pptx /Users/myname/Desktop.
SyntaxError: invalid syntax
dbfs cp dbfs:/myname/test.pptx /Users/myname/Desktop
Note, I am on a Mac, so hopefully the path is correct. What am I doing wrong?
To download files from DBFS to a local machine, you can check out this similar SO thread, which addresses the same issue:
Not able to copy file from DBFS to local desktop in Databricks
OR
Alternatively, you can use a GUI tool called DBFS Explorer to download the files to your local machine.
DBFS Explorer was created as a quick way to upload and download files to the Databricks filesystem (DBFS). It works with both AWS and Azure instances of Databricks. You will need to create a bearer token in the web interface in order to connect.
Reference: DBFS Explorer for Databricks
Hope this helps.
I have a Django app deployed on an Azure Web App, and I want to dynamically create webjobs. More precisely, when I save a Django model called Importer, I want to be able to create a new webjob made of 3 files:
run.py : the Python webjob
settings.job : the cron schedule
config.ini : a config file for the webjob
The content of the "settings.job" & "config.ini" comes from an admin form and is specific to the webjob.
When I save my model, the code creates a new directory in App_Data\jobs\triggered\{my job name} and copies "run.py" there from my app directory.
This works. However, when I try to create a new text file called "settings.job" in the job directory and write the cron schedule in it, I get a server error.
I tried many things, but the following basic test causes a failure:
file = open('test.txt','w')
file.write('this is a test')
file.close()
It seems that I don't have the right to write a file to the disk. How can that be solved?
Also, I want to be able to modify the content of the config and settings.job files when I update the corresponding Django model.
In addition, I tried to copy another file called "run2.py" to the webjob directory, and that fails too! I cannot copy any file other than run.py to that directory.
According to your description and based on my experience, I think the issue was caused by using a relative path in your code.
On Azure Web Apps, we only have permission to perform operations under the path D:/home.
My suggestion is to use an absolute path instead of a relative path, such as D:/home/site/wwwroot/App_Data/jobs/triggered/<triggered-job-name>/test.txt or /home/site/wwwroot/App_Data/jobs/triggered/<triggered-job-name>/test.txt, instead of test.txt. Also, please make sure the directory and its parent directories exist: use os.path.exists(path) to check and os.makedirs to create them before writing a file, as in the sketch below.
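A minimal sketch of that suggestion (the webjob name "myjob" is a placeholder, and the example assumes a Windows-based App Service where the home directory is D:\home):

import os

job_dir = r"D:\home\site\wwwroot\App_Data\jobs\triggered\myjob"   # placeholder job name
os.makedirs(job_dir, exist_ok=True)   # create the directory and its parents if missing

with open(os.path.join(job_dir, "settings.job"), "w") as f:
    f.write('{"schedule": "0 */15 * * * *"}')   # example cron schedule: every 15 minutes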
Meanwhile, you can also try the WebJobs API for some operations, such as creating a webjob by uploading a zip file or updating settings.job. To use these WebJobs APIs, make sure you have configured the Deployment credentials in the Azure portal, and add the basic auth header Authorization: Basic <BASE64-Encode("deployment-username":"password")> to the request.
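For instance, here is a hedged sketch of uploading a triggered webjob as a zip file through the Kudu WebJobs API using the requests library (the site name "mysite", the job name "myjob", the credentials, and the zip file name are placeholders):

import base64
import requests

site, job_name = "mysite", "myjob"
user, password = "deployment-username", "password"
auth = base64.b64encode(f"{user}:{password}".encode()).decode()

# The zip is assumed to contain run.py, settings.job and config.ini.
with open("myjob.zip", "rb") as f:
    resp = requests.put(
        f"https://{site}.scm.azurewebsites.net/api/triggeredwebjobs/{job_name}",
        headers={
            "Authorization": f"Basic {auth}",
            "Content-Type": "application/zip",
            "Content-Disposition": "attachment; filename=myjob.zip",
        },
        data=f,
    )
print(resp.status_code)   # 200 indicates the webjob was created or updated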
Here is my analysis:
There is no problem copying files other than "run.py" to the webjob directory.
The code crashes after (successfully) copying the file "run.py" from my Django app directory to the webjob directory. It does not matter whether I use shutil.copy/shutil.copyfile or simply open("{path}/run.py", "w"); the problem occurs when I try to write a file called "run.py" to disk in the webjob directory.
I think that when we create a new webjob directory, if the system detects a file called "run.py", it tries to carry out some operations. Then there is a conflict, with two processes trying to access the same file at the same time.
My solution: I copy the Python script to the webjob directory under the name "myrun.txt", and then I rename it to run.py using os.rename.
It works!
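For anyone who wants to reproduce it, here is a minimal sketch of that copy-then-rename workaround (the application and webjob paths are placeholders):

import os
import shutil

app_dir = r"D:\home\site\wwwroot\myapp"                            # placeholder: Django app directory
job_dir = r"D:\home\site\wwwroot\App_Data\jobs\triggered\myjob"    # placeholder: webjob directory

# Copy the script under a neutral name first, then rename it to run.py.
shutil.copyfile(os.path.join(app_dir, "run.py"), os.path.join(job_dir, "myrun.txt"))
os.rename(os.path.join(job_dir, "myrun.txt"), os.path.join(job_dir, "run.py"))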
I am working on a Python-based web application for collaborative xml/document editing, and one requirement from the client is that users should be able to push the files they created (and saved on the server) directly to a GitHub remote repo, without ever creating a local clone on the server (i.e., no local working directory or tracking of any sort). In GUI terms, this would correspond to going to the GitHub website and manually adding the file to the remote repo by clicking the "Upload files" or "Create new file" button, or simply editing an existing file on the remote repo on the GitHub website and then committing the change inside the web browser. I wonder whether this functionality is even possible to achieve, either using some Python GitHub modules or by writing some code from scratch against the GitHub API?
So you can create files via the API, and if the user has their own GitHub account, you can upload the files as them.
Let's use github3.py as an example of how to do this:
import github3

gh = github3.login(username='foo', password='bar')
repository = gh.repository('organization-name', 'repository-name')

# files_to_upload is a list of paths to the files you want to push
for file_info in files_to_upload:
    with open(file_info, 'rb') as fd:
        contents = fd.read()
    repository.create_file(
        path=file_info,
        message='Start tracking {!r}'.format(file_info),
        content=contents,
    )
You will want to check that it returns an object you'd expect, to verify the file was successfully uploaded. You can also specify committer and author dictionaries, so you can attribute the commit to your service and people aren't under the impression that the person authored it on a local git set-up.
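For example, a hedged sketch of passing those dictionaries (assuming github3.py's create_file accepts committer and author in the same shape as the GitHub contents API; the path, message, names and email addresses below are placeholders):

repository.create_file(
    path='docs/example.xml',
    message='Add example.xml from the web editor',
    content=b'<root/>',
    committer={'name': 'My Service', 'email': 'service@example.com'},
    author={'name': 'Jane Doe', 'email': 'jane@example.com'},
)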