I was practicing a machine-learning project from a Kaggle notebook, and I am facing a problem at one part. The error is NameError: name 'ImageList' is not defined. The Kaggle notebook link is this and the cell link is this. How can I fix it?
If you are getting this error even after defining ImageList, run the default first cell in the Kaggle notebook, or make sure that you have run all the cells in order.
I faced the same issue during my initial practice on Kaggle.
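For what it's worth, in fastai v1 notebooks ImageList is usually brought in by an import in the setup cell (an assumption here; check the notebook's own first cell). A generic, library-free illustration of why skipping that cell produces the NameError:

```python
# Using a name before any cell defines or imports it raises NameError.
try:
    ImageList  # noqa: F821 -- intentionally undefined in this session
except NameError as e:
    print(e)  # name 'ImageList' is not defined

# Running the defining cell first (e.g. `from fastai.vision import ImageList`
# in a fastai v1 notebook -- an assumption) makes the name resolvable.
```

Notebook cells share one interpreter session, so the order you run them in is what matters, not the order they appear on the page.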
I am new to RL and wanted to try some examples. One of them is a stock-management problem, and I wanted to run "rlpolicy_train.py" on Google Colab.
The link for the example:
https://github.com/t-wolfeadam/Alpyne/tree/main/examples/Stock%20Management%20Game
It seems that there is an error in the Alpyne library, and I can't understand what this error means or how I can fix it.
The screenshot of the error has been attached.
I would appreciate it if anyone can help.
When I try to use the translate function of the TextBlob library in a Jupyter notebook, I get:
HTTPError: HTTP Error 404: Not Found
I have posted my code and a screenshot of the error message for reference here. This code worked well 5-6 days ago, the first time I ran it, but ever since then, running exactly the same code gives me this error message. I have been trying for the last 4-5 days, but it has never worked again.
My code:
from textblob import TextBlob
en_blob = TextBlob('Simplilearn is one of the world’s leading certification training providers.')
en_blob.translate(to='es')
I am new to Stack Overflow and this is my first question on the platform, so please pardon me if my question does not follow its rules.
The TextBlob library uses a Google API for its translation functionality in the backend. Google has recently made some changes to that API, and as a result TextBlob's translation feature has stopped working. I found that by making a minor change in the translate.py file (in the folder where the TextBlob files are installed), as described below, we can get rid of this error:
original code:
url = "http://translate.google.com/translate_a/t?client=webapp&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
change the above line in translate.py to the following:
url = "http://translate.google.com/translate_a/t?client=te&format=html&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
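The two URLs are long and easy to mistype; the only real difference is the client parameter (webapp becomes te) plus a new format=html parameter, which can be checked with a short stdlib snippet:

```python
from urllib.parse import urlsplit, parse_qsl

OLD = "http://translate.google.com/translate_a/t?client=webapp&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
NEW = "http://translate.google.com/translate_a/t?client=te&format=html&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"

old_params = dict(parse_qsl(urlsplit(OLD).query))
new_params = dict(parse_qsl(urlsplit(NEW).query))

# Query parameters whose values differ or that exist on only one side:
changed = {k for k in old_params.keys() | new_params.keys()
           if old_params.get(k) != new_params.get(k)}
print(sorted(changed))  # ['client', 'format']
```

So if you edit the file by hand, changing `client=webapp` to `client=te&format=html` is all that is needed; the rest of the query string stays the same.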
I have just tried this. It did not work for me the first time. I restarted the Anaconda Prompt and restarted IPython, re-ran my snippets, and the problem went away after the fix. I am using Windows 10, I don't use a virtual environment, and the two files that were changed are:
C:\Users\behai\anaconda3\pkgs\textblob-0.15.3-py_0\site-packages\textblob\translate.py
C:\Users\behai\anaconda3\Lib\site-packages\textblob\translate.py
I have also found that I had to indent the new line with a tab, matching the file's existing indentation.
Added on 02/01/2021:
I did not do much at all. I applied the suggestion by Mr. Manish (green tick above). I had this 404 problem in the Anaconda environment; after applying the change above, I just restarted the "Anaconda Prompt (anaconda3)" and it worked.
This is the change as suggested above, in:
C:\Users\behai\anaconda3\Lib\site-packages\textblob\translate.py
# url = "http://translate.google.com/translate_a/t?client=webapp&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
url = "http://translate.google.com/translate_a/t?client=te&format=html&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
The file below gets updated automatically:
C:\Users\behai\anaconda3\pkgs\textblob-0.15.3-py_0\site-packages\textblob\translate.py
It's fixed at https://github.com/sloria/TextBlob/pull/398
You should use a tagged version with that fix.
# requirements.txt
git+https://github.com/sloria/TextBlob@0.17.1#egg=textblob
There is an error in my data-science code. I have tried multiple times; can someone please help and suggest what to do?
You never defined a variable called churn.
I think you meant to use the variable data, the name under which you created the data frame.
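Assuming, as guessed above, that the data frame was created under the name data and has a Churn column (both assumptions; the asker's full code isn't shown), the fix is just to reference the existing name:

```python
import pandas as pd

# Hypothetical stand-in for the asker's dataset.
data = pd.DataFrame({"Churn": ["No", "Yes", "No"]})

# `churn["Churn"]` raises NameError because `churn` was never assigned;
# the frame exists under the name `data`, so use that instead:
counts = data["Churn"].value_counts()
print(counts.to_dict())  # {'No': 2, 'Yes': 1}
```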
I am trying to run this code file using Google Colab. Although I am getting some of the outputs, an error shows up:
AttributeError: module 'tensorflow._api.v2.train' has no attribute 'RMSPropOptimizer'
I looked this problem up on Stack Overflow, as I do with most problems I face, but there's no solution. Could someone please help me understand what's wrong with the code? I am completely new to TensorFlow.
NOTE: I would have pasted the whole code here, but it's 1400+ lines, so I hyperlinked the file directly; pasting it would make this post very long and might annoy people. If needed, I can edit the post and paste the whole code.
The correct name is RMSprop, and it's located under tf.keras.optimizers. Therefore, please replace
optimizer=tf.train.RMSPropOptimizer(1e-4)
with
optimizer=tf.keras.optimizers.RMSprop(1e-4)
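As a quick sanity check that the old name is gone and the new one works (a sketch assuming TensorFlow 2.x is installed):

```python
import tensorflow as tf

# tf.train.RMSPropOptimizer was removed in TF 2.x; the optimizer now
# lives under tf.keras.optimizers and is spelled RMSprop.
assert not hasattr(tf.train, "RMSPropOptimizer")

opt = tf.keras.optimizers.RMSprop(1e-4)
print(type(opt).__name__)  # RMSprop
```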
TensorFlow version: v2.5.0
Sources and more info:
TensorFlow docs
Keras example
GitHub issue comment
Similar question on StackOverflow
I am trying to get the Wide & Deep tutorial working, but the following line keeps giving me issues when I copy and paste the code from GitHub and the website.
df_train[LABEL_COLUMN] = (df_train["income_bracket"].apply(lambda x: ">50K" in x)).astype(int)
I get the below error
TypeError: argument of type 'float' is not iterable
I am not too familiar with lambda functions, but I think it is making a dummy variable, so I tried this instead:
for i in range(len(df_train)):
if df_train.loc[i,'income_bracket']=='>50k':
df_train.loc[i,LABEL_COLUMN] =1
else:
df_train.loc[i,LABEL_COLUMN] =0
But got the error
TypeError: Expected binary or unicode string, got nan
How do I get this tutorial working?
EDIT:
first line of data and headers
A lambda function is quite useful and simple; it is just an anonymous inline function and won't create dummy variables.
I've noticed that you load the original data from a CSV file you exported. Try not to do that; just use the original data-download step shown in the tutorial code. I tried it that way and it worked.
But I also got the same problem when I switched to other data sets for training, so I still hope someone can solve this problem in a deeper way.
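Digging a bit deeper: the census CSV has blank/missing fields, which pandas reads as float NaN, and Python's in operator needs an iterable, hence the TypeError. A plain-Python sketch of the failure and a guard (the sample values are illustrative):

```python
rows = [" >50K", " <=50K", float("nan")]  # NaN stands in for a missing field

# This mirrors the tutorial's lambda and fails on the NaN row:
try:
    labels = [int(">50K" in x) for x in rows]
except TypeError as e:
    print(e)  # argument of type 'float' is not iterable

# Guarding on the type (or dropping NaN rows first) avoids the error:
labels = [int(isinstance(x, str) and ">50K" in x) for x in rows]
print(labels)  # [1, 0, 0]
```

In pandas terms, calling dropna() on the frame before the apply() achieves the same thing.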
It's an issue with either the data or the TensorFlow code. We have submitted an issue about it at https://github.com/tensorflow/tensorflow/issues/4293
You can download the files manually and remove the broken lines, then run with this command:
python ./wide_n_deep_tutorial.py --train_data /home/data/train_data --test_data /home/data/test_data
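"Broken lines" here means rows with missing fields. If you'd rather script the cleanup than edit by hand, a small stdlib sketch (the 5-column sample is illustrative; the real adult.data / adult.test files have 15 columns):

```python
import csv
import io

# In-memory stand-in for the downloaded census file.
raw = io.StringIO(
    "39, State-gov, 77516, Bachelors, <=50K\n"
    "50, Self-emp-not-inc, 83311, Bachelors\n"   # short row: broken
    "38, Private, 215646, HS-grad, <=50K\n"
)
EXPECTED_FIELDS = 5  # use 15 for the actual census files

# Keep only rows with the full complement of fields.
clean = [row for row in csv.reader(raw) if len(row) == EXPECTED_FIELDS]
print(len(clean))  # 2
```

Write `clean` back out with csv.writer and point --train_data / --test_data at the cleaned files.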