I have a classification model, and I have nearly finished turning it into a Streamlit app.
I have the embeddings and model on Dropbox. I have successfully imported the embeddings, since they are a single file.
However, AutoTokenizer.from_pretrained() takes the path to a folder of files rather than a single file. The folder contains these files:
config.json
special_tokens_map.json
tokenizer_config.json
tokenizer.json
When using the tool locally, I would direct the function to the folder and it would work.
However, I am unable to point it at the folder on Dropbox, and as far as I can see I can only download individual files from Dropbox into Python, not a whole folder.
Is there a way to create a temp folder in Python, download all the files into it individually, and then run AutoTokenizer.from_pretrained() on that folder?
To get around this, I uploaded the model to the Hugging Face Hub so I could load it from there, i.e.:
tokenizer = AutoTokenizer.from_pretrained("ScoutEU/MyModel")
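For anyone who wants to stay on Dropbox: a minimal sketch of the temp-folder approach asked about above. The share links below are hypothetical placeholders (a real Dropbox share link needs ?dl=1 at the end so the raw file is returned instead of the preview page):

import tempfile
from pathlib import Path

import requests
from transformers import AutoTokenizer

# Hypothetical direct-download links; replace with your own share links ending in ?dl=1.
FILE_URLS = {
    "config.json": "https://www.dropbox.com/s/XXXX/config.json?dl=1",
    "special_tokens_map.json": "https://www.dropbox.com/s/XXXX/special_tokens_map.json?dl=1",
    "tokenizer_config.json": "https://www.dropbox.com/s/XXXX/tokenizer_config.json?dl=1",
    "tokenizer.json": "https://www.dropbox.com/s/XXXX/tokenizer.json?dl=1",
}

# Download each file into a fresh temporary folder.
tmp_dir = Path(tempfile.mkdtemp())
for name, url in FILE_URLS.items():
    response = requests.get(url)
    response.raise_for_status()
    (tmp_dir / name).write_bytes(response.content)

# from_pretrained() accepts any local directory path.
tokenizer = AutoTokenizer.from_pretrained(str(tmp_dir))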
Related
I have been trying to make this file run on Google Colab, but I can't find any way to do it.
(Screenshot of code from a file that sits alongside backbone.py; EfficientDet-DeepSORT-Tracker is the top-level folder of the entire package.)
How do I fix the file not being able to find backbone.py?
EDIT for more context: the errors I shared came from trying to run waymo_open_dataset.py, which cannot find the other .py files alongside it.
According to this past question, you can import filename for filename.py. So in the main.py file you are trying to run in Colab, import the required files at the top of main.py.
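As a minimal sketch, assuming the repo was cloned under /content (adjust the path to wherever it actually lives in your Colab session):

import sys

# Hypothetical clone location in Colab; adjust as needed.
sys.path.append("/content/EfficientDet-DeepSORT-Tracker")

# With the folder on sys.path, sibling modules resolve normally.
import backbone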
I have cloned a GitHub repository in Colab, and all of the .py files I need appear in the file pane on the left. I have set up a for loop that collects all of the .py files into a list. Now I want to loop through and run them all so that I can use their classes and functions without having to paste their code into Colab.
(Screenshot: example of the list of .py files.)
I thought the below method would work:
(Screenshot: the error message.)
But as shown, it fails with "no such file or directory". Does anybody have any ideas?
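A commonly suggested approach for this situation is to import each file as a module rather than open it as a plain file; a minimal sketch, assuming the repo path below (a placeholder) and top-level .py files:

import importlib.util
import sys
from pathlib import Path

# Hypothetical clone location in Colab; adjust to your repository.
repo_dir = Path("/content/my_repo")
sys.path.append(str(repo_dir))  # so the files can also import each other

for py_file in repo_dir.glob("*.py"):
    # Build a module from the file's path and execute it once.
    spec = importlib.util.spec_from_file_location(py_file.stem, py_file)
    module = importlib.util.module_from_spec(spec)
    sys.modules[py_file.stem] = module
    spec.loader.exec_module(module)
    # The file's classes and functions are now attributes of `module`.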
I'm trying to set up an API on Azure's Web App Service using Bottle plus Anaconda packages.
I can't simply use a copy of the site-packages folder because NumPy is involved: in addition to site-packages, I must also give NumPy access to the MKL binaries. So I copy the Anaconda\envs\{ENV_NAME}\Library\bin folder into the app and add it to %PATH%. That folder has fewer than 200 files in it, so I'm surprised to see the following error during deployment:
2020-10-29T04:34:21.3218237Z ##[error]Error: EMFILE: too many open files, open 'D:\a\_temp\temp_web_package_058969368946595324\site-packages\statsmodels\tsa\arima\datasets\__init__.py'
Everything builds and runs as long as I don't add the bin folder to %PATH%.
No, I'm not close to my file size limit on the Azure Web App Service. Has anyone run into this before?
This error happens because of the XDT Transform.
During an XDT Transform, all contents of the original package are transformed and then zipped up; this error is thrown when the deployment package is very large.
I am a newbie in ML and deep learning and am currently working in a Jupyter notebook.
I have an image dataset in the form of a zip file containing nearly 28,000 images, downloaded to my desktop.
However, I cannot find any code that will let the Jupyter notebook unzip the file and read the images so that I can work with them and develop a model.
Any help is appreciated!
Are these threads from the past helpful?
How to unzip files into a directory in Python
How to unzip files into memory in Python (you may encounter issues with this approach if the images are too large to fit in Jupyter's allocated memory)
How to read images from a directory in Python
If you go the directory route, a friendly reminder that you'll need to update the code in each example to match your directory structure. E.g. if you want to save images to and read images from a directory called "image_data", then change the code examples to unzip files into that directory and read images from that directory.
Lastly, for finding out what files are in a directory (so that you can read them into your program), see this thread answering the same question.
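Putting the directory route together, here is a minimal sketch (file and directory names are placeholders; it assumes Pillow is installed and the images are .jpg):

import zipfile
from pathlib import Path

from PIL import Image  # pip install pillow

zip_path = "images.zip"       # hypothetical: your downloaded zip file
out_dir = Path("image_data")  # directory to extract into

# Unzip everything into out_dir (created automatically if it doesn't exist).
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(out_dir)

# Read the extracted images (adjust the pattern to your file type).
images = [Image.open(p) for p in out_dir.rglob("*.jpg")]
print(f"Loaded {len(images)} images")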
I am trying to train YOLOv3 with a custom dataset on Google Colab. I uploaded my folders, weights, etc. When I run my train.py, I get a path error. I run the code like this:
!python3 "drive/TrainYourOwnYolo/2_Training/Train_YOLO.py"
The error says:
content/TrainYourOwnYolo/2_Training/dataset/img20.jpg is not found.
As I understand it, on Colab all my folders are under the drive folder. I don't understand why YOLO is trying to find my dataset under the content folder. Do you have any idea?
It seems you have uploaded your data to /drive/TrainYourOwnYolo/ and not to /content/TrainYourOwnYolo/, where your script is looking.
The /content folder is what Colab normally uses for saved files when you don't use Google Drive. But you have mounted your Google Drive under /drive, so your script unsurprisingly fails to find the files there.
You should change the file paths in your Train_YOLO.py script to replace references to /content with /drive.
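If you are unsure which root actually holds the data, a quick check like this (paths taken from the question) can confirm it before editing the script:

import os

# Print which of the two candidate roots exists in this Colab session.
for root in ("/content/TrainYourOwnYolo", "/drive/TrainYourOwnYolo"):
    print(root, "->", "exists" if os.path.exists(root) else "missing")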
If this is not possible, you can find the /content folder in the file browser on the left of your Colab notebook; right-click on it and you'll see an option for uploading files there.