I am following a deep learning tutorial book in Japanese that uses MNIST for its handwritten images. It contains the line from dataset.mnist import load_dataset, and when I tried it, it did not work: it gave an error saying there is no module named dataset.mnist. I have installed the modules dataset and mnist individually using pip. The book recommended using Anaconda, but I have tried it with no success.
How can I use the module dataset.mnist?
The first question I want to ask you is which deep learning framework you are working with to solve your problem.
There are many deep learning frameworks; PyTorch, TensorFlow, and Keras are examples of such frameworks.
1) If you are working with the PyTorch framework, first import torch using the command
import torch
and then you can import the MNIST dataset using the command
from torchvision.datasets import MNIST
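For example, a minimal sketch of downloading MNIST through torchvision and wrapping it in a DataLoader (the root path and batch size are just placeholders):
import torch
from torchvision import datasets, transforms
# Download MNIST to ./data and convert the images to tensors.
train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)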
2) For the Keras framework, use the following commands to import the MNIST dataset.
import keras
from keras.datasets import mnist
NOTE: This can also be written as follows, which may make things clearer:
import keras
import keras.datasets as datasets
and then you can access the MNIST dataset through datasets, which is an alias of keras.datasets.
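For example, a minimal sketch of loading the data with the Keras helper (the variable names are just illustrative):
from keras.datasets import mnist
# load_data() returns two (images, labels) tuples, for training and test.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)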
Similarly, you can import the MNIST dataset in other frameworks as well.
Hope this helps.
Adding on to @SauravRai's answer.
For TensorFlow:
from tensorflow.examples.tutorials.mnist import input_data
input_data.read_data_sets('my/directory')
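For context, read_data_sets returns an object with train, validation, and test splits; note that this API only exists in TensorFlow 1.x (a minimal sketch, with a placeholder directory):
from tensorflow.examples.tutorials.mnist import input_data
# Download/cache MNIST and get one-hot encoded labels.
mnist = input_data.read_data_sets('my/directory', one_hot=True)
print(mnist.train.images.shape)  # (55000, 784) with the default validation split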
Intellisense works fine on the import statement, but when it comes to chained methods it shows different suggestions.
The Python and Pylance extensions are installed.
From this issue on GitHub:
Try adding this to the bottom of your tensorflow/__init__.py (in .venv/Lib/site-packages/tensorflow for me):
# Explicitly import lazy-loaded modules to support autocompletion.
# pylint: disable=g-import-not-at-top
if _typing.TYPE_CHECKING:
from tensorflow_estimator.python.estimator.api._v2 import estimator as estimator
from keras.api._v2 import keras
from keras.api._v2.keras import losses
from keras.api._v2.keras import metrics
from keras.api._v2.keras import optimizers
from keras.api._v2.keras import initializers
# pylint: enable=g-import-not-at-top
The problem is that keras is exposed through a special class that does lazy loading, rather than as a normal module.
Edit: With updates to tf, vscode, or something else I'm not having this issue and don't need to use the above fix anymore. I just have to use keras = tf.keras instead of from tensorflow import keras and I have Intellisense working now.
Did you try clearing the cache on your system?
Try this. Don't import it directly like this:
import tensorflow as tf
import tensorflow.keras as keras
Instead, do:
import tensorflow as tf
keras = tf.keras
After this change, everything was fixed and started showing better suggestions, including function documentation.
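For example, with the alias in place, Pylance resolves members such as keras.layers and keras.Model again (a minimal sketch):
import tensorflow as tf
keras = tf.keras
layers = keras.layers  # autocompletion and docstrings work on these names now
model = keras.Sequential([layers.Dense(10, activation="softmax")])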
tensorflow.python.keras is for developers only and should not be used directly, but I think it is fine to use purely as a "type". I have also read that it can be a different version than tensorflow.keras, so keep this in mind.
# These are the imports that actually load the correct code
import tensorflow.keras as tfk
import tensorflow.keras.layers as layers
# This is for type hinting and intellisense
import tensorflow.python.keras as _tfk
import tensorflow.python.keras.layers as _layers
# This gets highlighted as an error by my linter, but it runs
tfk: _tfk
layers: _layers
# from now on, the intellisense and docstrings work
# ...
While keras = tf.keras does the trick, I was dumbstruck that IntelliSense on my home machine wasn't working. It turns out the Jupyter notebook I was using wasn't using the right Python interpreter (a conda environment with tf and keras both at 2.11.0), due to a window reload or whatever.
This worked for me using conda with CUDA and TensorFlow:
import tensorflow as tf
from tensorflow import keras
from keras.api._v2 import keras as KerasAPI
KerasAPI.applications.ResNet50()
I am very confused after reading a lot about Keras and TensorFlow, and I still have some basic questions in my mind.
My confusion started from the answer to this question, where the author writes keras standalone and from tensorflow.keras import keras.
1- (Python case):
Does keras use any backend when I write import keras and there is not a single line of code related to tensorflow (e.g. tf.keras or tf.keras.layers) in my full implementation of the model, only import keras? If it does, is there any way to check which backend is being used? (See the snippet after this list.)
2- The same question in the case of the R language.
3- Is TensorFlow only used as a backend when we write import tensorflow as tf and import tf.keras?
4- Do import keras and import tf.keras have any discrepancy in performance and accuracy in the case of Python?
5- Do the versions of keras and tensorflow have an impact on performance and accuracy in both languages (R and Python)?
6- What could be the reasons for a 5% accuracy difference between R and Python? Python gives 94%, while the same implementation in R gives 89% accuracy. The versions of keras & tensorflow in R are 2.3.0 and 2.2.0, while the versions in Python are tf: 2.3.0, keras: 2.4.3. Please see this one.
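Regarding question 1, a minimal sketch of how one could check which backend standalone Keras is using (assuming a multi-backend Keras 2.x style installation):
from keras import backend as K
# Prints the name of the active backend, e.g. 'tensorflow'.
print(K.backend())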
I am getting this error right at the beginning, when importing my packages. I haven't been able to find the correct remedy to fix the issue. Any help is greatly appreciated.
From what I can tell, it looks like it may be a TensorFlow issue?
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from datetime import datetime
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
I found this:
Background
Colab has two versions of TensorFlow pre-installed: a 2.x version and a 1.x version. Colab uses TensorFlow 2.x by default, though you can switch to 1.x by the method shown below.
Specifying the TensorFlow version
Running import tensorflow will import the default version (currently 2.x). You can use 1.x by running a cell with the tensorflow_version magic before you run import tensorflow.
%tensorflow_version 1.x
TensorFlow 1.x selected.
More detail: https://colab.research.google.com/notebooks/tensorflow_version.ipynb#scrollTo=NeWVBhf1VxlH
You seem to have TensorFlow 2.x, while the bert module uses TensorFlow 1.x. You can verify here that TensorFlow 1.x has tf.train.Optimizer, while according to this, TensorFlow 2.x has no such module.
Make sure you install the TensorFlow version that bert requires.
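For example, a quick way to check the installed version and whether the attribute bert needs is present (a minimal sketch):
import tensorflow as tf
print(tf.__version__)
# tf.train.Optimizer exists in TensorFlow 1.x but was removed in 2.x.
print(hasattr(tf.train, "Optimizer"))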
I am about to learn about neural networks and want to reproduce a tutorial which trains a neural network to identify handwritten digits. The training of the neural network should be done with the MNIST data set. Unfortunately, that is exactly where my issue comes up, as I am not able to read in the MNIST data set.
The environment I am using is a Jupyter Notebook and Python 3.
These are the lines of code I have (line 2 causes the issue):
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)
Line 2 causes this error message:
ModuleNotFoundError: No module named 'tensorflow.contrib'
OK, what the error tells me is clear: the reason is that the directory /tensorflow/contrib/... does not exist in my tensorflow installation folder.
The issue is caused by line 2, as the module input_data.py contains this line of code:
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
So, the core of my issue is that I do not know where to get the module read_data_sets from. I searched on GitHub, but the path
/tensorflow/contrib/learn/python/learn/datasets/mnist/
does not exist there.
In detail: the subfolder 'mnist' is not to be found on GitHub, and therefore I also cannot find the file read_data_sets.py.
So, where do I find the missing module 'read_data_sets'?
It would be great if someone could help me, as this issue stops my attempt to deal with neural networks right at the very beginning.
Thanks a lot and kind regards,
Matthias
It seems that you are using a newer version of TensorFlow (>= 1.13.0), so you may follow this link if you want to load the MNIST dataset.
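For reference, in recent TensorFlow versions MNIST can be loaded through tf.keras instead of the removed tutorials module (a minimal sketch):
import tensorflow as tf
# Downloads MNIST on first use and returns NumPy arrays.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)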
I am new to TensorFlow and TFLearn, and while following some tutorials I found the Projector tool: https://www.youtube.com/watch?v=eBbEDRsCmv4&t=629s. I was trying to use it with TFLearn, but I couldn't find any example on the internet, and the documentation on the TensorFlow page is not very intuitive: https://www.tensorflow.org/programmers_guide/embedding. Can somebody help me with a proper example that integrates TFLearn and the Projector?