I am a newbie in ML and Deep Learning and am currently working in a Jupyter notebook.
I have an image dataset in the form of a zip file containing nearly 28000 images, downloaded to my desktop.
However, I have not been able to find any code that will let the Jupyter notebook unzip the file and read it, so that I can work with it and develop a model.
Any help is appreciated!
Are these threads from the past helpful?
How to unzip files into a directory in Python
How to unzip files into memory in Python (you may encounter issues with this approach if the images are too large to fit in Jupyter's allocated memory)
How to read images from a directory in Python
If you go the directory route, a friendly reminder that you'll need to update the code in each example to match your directory structure. E.g. if you want to save images to and read images from a directory called "image_data", then change the code examples to unzip files into that directory and read images from that directory.
Lastly, for finding out what files are in a directory (so that you can read them into your program), see this thread answering the same question.
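As a concrete starting point, the directory route from those threads can be sketched with the standard-library zipfile module. The zip path and output directory names here are placeholders to adjust to your own setup:

```python
import zipfile
from pathlib import Path

# Placeholder paths -- point ZIP_PATH at your downloaded archive and
# OUT_DIR at wherever you want the extracted images to live.
ZIP_PATH = "images.zip"
OUT_DIR = Path("image_data")

def unzip_dataset(zip_path, out_dir):
    """Extract the archive into out_dir and return the extracted file paths."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
    # Collect every extracted file so they can be read one at a time.
    return sorted(p for p in out_dir.rglob("*") if p.is_file())
```

Each returned path can then be opened individually (e.g. with PIL's Image.open), which avoids loading all 28000 images into memory at once.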
When I convert my file to a .exe and I want, for example, an image to be displayed or a sound to be played in my app, I am forced to put them in the same folder; otherwise the image will not appear and the sound will not play.
How can I display an image (png) / play a sound (an mp3) if my .exe and the resources it uses (the image and the sound) are in a different folder?
For example, I've done myimageframe = PhotoImage(file='myimage.png') and playsound('mysound.mp3').
I would like to access the resources even on another machine, e.g. if I transfer my .exe to a friend or to myself on another computer.
For the image, an error is raised:
The sound just doesn't play (nothing happens).
file='myimage.png' is what's called a relative path - since there are no folders listed before, the app will look in the same folder where the .exe is executed from.
From the same machine
use absolute paths to the files (e.g. C:\Temp\myImage.png), or
change the working directory to another location at runtime (e.g. os.chdir(r"C:\Temp"))
From another machine
find a way to embed the files' contents into the .py file (e.g. a base64-encoded string??),
find a way to embed the files' contents into the .exe (no idea here - read the docs for your .exe bundler?)
host the files on the public internet (e.g. a github repo, or file-supporting alternatives to PasteBin) & pull them down using requests
NOTE: that last option will require the running machine to have internet access, and should probably handle things gracefully if it doesn't....
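That last option might look something like this standard-library sketch (the URL is hypothetical; swap in a raw link to your own hosted file):

```python
import os
import urllib.error
import urllib.request

# Hypothetical URL -- replace with a raw link to your own hosted file.
IMAGE_URL = "https://raw.githubusercontent.com/you/yourrepo/main/myimage.png"

def fetch_resource(url, dest):
    """Download a resource next to the .exe, reusing a cached copy.

    Returns the local path on success, or None when the machine has
    no internet access, so the caller can degrade gracefully.
    """
    if os.path.exists(dest):
        return dest  # cached from a previous run
    try:
        urllib.request.urlretrieve(url, dest)
        return dest
    except (urllib.error.URLError, OSError):
        return None  # offline -- run without the image/sound
```

Then guard the use site: path = fetch_resource(IMAGE_URL, "myimage.png"); if path is not None: myimageframe = PhotoImage(file=path).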
I have a classifying model, and I have nearly finished turning it into a streamlit app.
I have the embeddings and model on Dropbox. I have successfully imported the embeddings, as they are a single file.
However, AutoTokenizer.from_pretrained() takes a path to a folder containing various files, rather than a particular file. The folder contains these files:
config.json
special_tokens_map.json
tokenizer_config.json
tokenizer.json
When using the tool locally, I would point the function to the folder and it would work.
However, I am unable to point it to the folder on Dropbox, and I cannot download a folder from Dropbox into Python, only a file (as far as I can see).
Is there a way of creating a temp folder in Python, or of downloading all the files individually and then running AutoTokenizer.from_pretrained() with all the files?
To get around this, I uploaded the model to HuggingFace so I could use it there.
I.e.
tokenizer = AutoTokenizer.from_pretrained("ScoutEU/MyModel")
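For reference, the temp-folder approach asked about is also workable: download each tokenizer file individually into a temporary directory, then point AutoTokenizer.from_pretrained() at that directory. A sketch, where the Dropbox links are placeholders (with ?dl=1 so a share link serves the raw file rather than a preview page):

```python
import os
import tempfile
import urllib.request

# Placeholder share links -- substitute your own, keeping ?dl=1.
TOKENIZER_FILES = {
    "config.json": "https://www.dropbox.com/s/XXXX/config.json?dl=1",
    "special_tokens_map.json": "https://www.dropbox.com/s/XXXX/special_tokens_map.json?dl=1",
    "tokenizer_config.json": "https://www.dropbox.com/s/XXXX/tokenizer_config.json?dl=1",
    "tokenizer.json": "https://www.dropbox.com/s/XXXX/tokenizer.json?dl=1",
}

def fetch_tokenizer_dir(file_urls, download=None):
    """Download each file into a fresh temp folder and return its path."""
    download = download or (lambda url: urllib.request.urlopen(url).read())
    tmpdir = tempfile.mkdtemp()
    for name, url in file_urls.items():
        with open(os.path.join(tmpdir, name), "wb") as f:
            f.write(download(url))
    return tmpdir
```

Then tokenizer = AutoTokenizer.from_pretrained(fetch_tokenizer_dir(TOKENIZER_FILES)). Uploading the model to the Hugging Face Hub, as above, remains the simpler route.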
I am very new to Voila and Jupyter. I understand Jupyter notebook files (i.e. files with extension .ipynb) can be loaded in either Voila server or Jupyter server.
To elaborate, for instance, we have files below in the same folder: -
a.ipynb
b.ipynb
My question is: is it possible for me to load only "a.ipynb" in Voila? The example is just for demonstration purposes; we could have a large number of files/folders in the folder.
I have scanned through the Voila website, but it doesn't look like there is any existing feature I can use to support this.
Thank you.
I have cloned a GitHub repository and it has successfully imported all of the .py files I need on the left. I have set up a for loop that collects all of the .py files into an array. Now I want to loop through and run them all so that I can use their classes and functions without actually having to re-paste their code into Colab.
Example of list of .py files
I thought the below method would work:
Error message
But as shown, it says there is no such file or directory. Does anybody have any ideas?
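Rather than "running" the files, one approach is to import each one as a module with the standard-library importlib, which makes its classes and functions available without pasting any code. A sketch:

```python
import importlib.util
import os

def import_from_path(path):
    """Load a .py file as a module object, given a path to the file."""
    # Derive the module name from the filename, e.g. "utils" for utils.py.
    name = os.path.splitext(os.path.basename(path))[0]
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the file's top-level code once
    return module
```

Then modules = [import_from_path(p) for p in py_files] gives objects whose attributes are the files' classes and functions. A "no such file or directory" error at this point usually means the list holds bare filenames: prepend the cloned repo's folder (e.g. os.path.join("repo_name", fname), where "repo_name" is illustrative) before loading.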
Currently my program uses txt files (filled with data) that are located on the desktop. I am going to be sending this out to users, and the text files are going to be included in an installer. When installing, I don't want these files to crowd the users' desktops. Any ideas on this?
Kunwar, this is a somewhat subjective question, so this essentially is a subjective answer.
This is really about packaging. You've not provided any info on what your package looks like, but in principle, why not just put the text files in a directory with your program?
your_program/inputs/*.txt
Then they will always be available to your tool. You can find the current location of your script from within the script itself and build the path to the input file, no matter where users have stored the script on their machines.
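A minimal sketch of that last idea, assuming the inputs/ layout above (the filename in the usage line is illustrative):

```python
from pathlib import Path

def input_path(name, base=None):
    """Build the path to a bundled data file in <script dir>/inputs/.

    base defaults to this script's own directory, so the files are
    found no matter what the user's working directory is.
    """
    base = Path(base) if base is not None else Path(__file__).resolve().parent
    return base / "inputs" / name
```

Then data = input_path("levels.txt").read_text() reads your_program/inputs/levels.txt regardless of where the program was launched from.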