I am trying to use Keras to develop a neural network in Python. After managing to install Anaconda3 on my Windows 10 workstation (with all its libraries: numpy, scikit-learn, pandas, SciPy and matplotlib), I realized I also need TensorFlow or Theano.
After failing to install TensorFlow, I downloaded and was able to install Theano, but when trying to import it from the Python prompt I received the following:
WARNING: "g ++ not detected! Theano will be unable to execute optimized C implementations (for both CPU and GPU) and will default to Python implementations. Performance will be several degraded. To remove this warning, set Theano flags cxx to an empty string"
Hoping to solve the problem this way, I downloaded the Cygwin64 GNU C++ compiler, but nothing changed at all! Assuming this really is the right way to move forward, how do I set the "Theano flags cxx"?
First, it's only a performance issue to run Theano without g++; it's a warning, not an exception, when importing it.
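If you just want to silence the warning and accept the slower pure-Python implementations, Theano reads its configuration from the THEANO_FLAGS environment variable or from a .theanorc file in your home directory. A minimal sketch (setting cxx to an empty string, as the warning suggests), on the Windows command prompt:
set THEANO_FLAGS=cxx=
or, equivalently, in .theanorc:
[global]
cxx =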
But you probably want performance when using a deep learning library like Keras, so let's try to fix the Theano installation.
Please follow the Theano docs about installing Theano on Windows. You might want to clean up any previous installation of the requirements first.
To install GCC, follow this section, which says:
Theano C code compiler currently requires a GCC installation. We have
used the build TDM GCC which is provided for both 32- and 64-bit
platforms...
Download it from here and follow the installation instructions.
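After installing it, it's worth verifying that g++ is actually visible from a fresh command prompt before retrying the import, e.g.:
g++ --version
If that command is not found, Theano will keep printing the same warning.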
TensorFlow
I recommend working with TensorFlow, as Keras recently changed its default backend from Theano to TensorFlow.
Using Anaconda and pip, you should be able to simply do pip install tensorflow and it will work.
Actually, just today I installed Keras and TensorFlow on Windows 10 using Anaconda by simply running pip install keras tensorflow, so I suggest you try a fresh, clean installation of Anaconda and Python and try this again.
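Once the install finishes, a quick sanity check is to import both packages and print their versions, e.g.:
python -c "import tensorflow as tf; print(tf.__version__)"
python -c "import keras; print(keras.__version__)"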
Please update if you succeed or run into other issues installing Theano / TensorFlow / Keras.
I've recently begun using an Apple Silicon Mac. I installed TensorFlow 2.6.2 through Anaconda, which was the latest version I could find.
When I run the training code, the training seems to begin initializing until it reaches some memory error; then it hangs until I manually stop it.
The printed output looks like:
(machine_learning) eric@mac-mini cr_battle_predictor % python3 main.py
2021-12-25 22:11:24.286059: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
0%| | 0/350 [00:00<?, ?epoch/s]2021-12-25 22:11:24.368779: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
python3(48188,0x304135000) malloc: Incorrect checksum for freed object 0x7fe41cac4e80: probably modified after being freed.
Corrupt value: 0x7fe42c07e480
python3(48188,0x304135000) malloc: *** set a breakpoint in malloc_error_break to debug
zsh: abort python3 main.py
(machine_learning) eric@mac-mini cr_battle_predictor % /Users/eric/.conda/envs/machine_learning/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
For some reason, very occasionally (around 1 in 7 attempts), the error does not appear and, as far as I can tell, the training progresses normally without trouble.
I am aware that a similar problem has been asked about here. However, the only solution provided was to ensure that I was using the correct interpreter and the most recent version of TensorFlow. I am using Python 3.9.7 and TensorFlow 2.6.2, and I made sure that my program was using these versions too.
What causes this problem? I am willing to share any needed information.
Installing TensorFlow on an M1 Mac is a real pain. My suggestion is to start the TensorFlow installation over from scratch; I faced the same issue as you and was unable to fix it otherwise. First off, I'm going to assume that you are on Monterey (macOS 12); if you aren't, you'll have to refer to https://github.com/apple/tensorflow_macos/issues/153, which seems to have worked for some people.
If that doesn't work, upgrade to Monterey and follow the steps outlined here: https://developer.apple.com/metal/tensorflow-plugin/. Here they are:
Download and install a Conda env [you can get this from https://github.com/conda-forge/miniforge#miniforge3; download "arm64 (Apple Silicon)", because you are running on M1]:
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
Install the TensorFlow dependencies:
conda install -c apple tensorflow-deps
Then install the base TensorFlow:
python -m pip install tensorflow-macos
Finally, get the TensorFlow Metal plugin:
python -m pip install tensorflow-metal
TensorFlow should now work (and use the M1 GPU too!).
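To confirm that the GPU is actually visible, you can list the physical devices from Python (a quick check; with the Metal plugin installed you should see one GPU entry):
import tensorflow as tf
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))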
I recently found an article indicating that the conventional methods for installing Python machine-learning modules such as TensorFlow and Keras are not optimized for a computer's particular CPU. How can I configure TensorFlow and Keras so that they are most compatible with my processor, on macOS in Python 2.7?
If it helps, I use PyCharm to download most of my libraries and as my coding interface.
For any environment, if you want to install TensorFlow you can simply run one of these commands:
pip install tensorflow (for CPU, python2.7)
pip3 install tensorflow (for CPU, python3)
You need to specify it explicitly if you want to install TensorFlow with GPU support, like this:
pip install --upgrade tensorflow-gpu
But for GPU you will need CUDA (an NVIDIA graphics card) to run.
In the very same way you can install Keras, where you don't have to add a separate keras-gpu package; just use:
pip install keras
I think what you read meant that TensorFlow programs run much faster if your computer has a GPU. You need an NVIDIA GPU in your computer to install TensorFlow with GPU support on your Mac, and as far as I know, after version 1.2 TensorFlow no longer provides GPU support for macOS.
I installed TensorFlow 1.4.0 with pip3 (Windows).
I'm trying to use cv2.dnn.readNetFromTensorflow with a retrained Inception V3 graph.
Unfortunately it seems cv2 does not support retrained graphs directly, so I tried to transform graph.pb into one usable by cv2, but I can't find any transform_graph in graph_transforms in TensorFlow.
Should I install TensorFlow differently?
You have to build it first:
bazel build tensorflow/tools/graph_transforms:transform_graph
Note that it will not work if you're using an OpenCV version below 3.3.1, and even then my graphs are not very accurate after loading.
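Once built, the binary ends up under bazel-bin; a rough usage sketch (the input/output node names Mul and final_result are assumptions for a retrained Inception V3 graph and may differ for yours):
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=graph.pb \
  --out_graph=optimized_graph.pb \
  --inputs='Mul' \
  --outputs='final_result' \
  --transforms='strip_unused_nodes fold_constants(ignore_errors=true) fold_batch_norms fold_old_batch_norms'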
Answer to the edit:
Yes, that tool has to be built with Bazel and is not part of your normal download.
I'm struggling with conflicting information regarding TensorFlow: lots of info in lots of places, and never complete enough.
I have my system set up with CUDA 8.0 and cuDNN, and I have Keras + Theano working OK with Python 2.7. I'm trying to move to TensorFlow.
As I had compatibility problems with numpy and other things when I tried to install it in the same environment, I installed Miniconda2, created a virtual env for it with conda create -n tensorflow pip and activated it, as instructed here: https://www.tensorflow.org/install/install_linux#InstallingAnaconda
The environment seems operational.
Afterwards, I installed TensorFlow from https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp27-none-linux_x86_64.whl and also Keras, only to notice I had some modules duplicated in conda list, some marked with a version string, others marked with <pip> only. In particular, I ended up with both tensorflow-gpu 1.2.1 and tensorflow 1.1.0. The old version just comes along with Keras.
Also, there is a myriad of warnings about TensorFlow not being compiled to use certain CPU instruction sets, and there is this answer, How to compile Tensorflow with SSE4.2 and AVX instructions?, about compiling it with Bazel, but I can't really find any information about where to put the source code and which files to move where after running that bazel command line.
To make matters worse, whenever I run a simple 20x20 matrix multiplication with "/gpu:0" as the device, the code lists those horrendous warnings, correctly detects the presence of a GTX 1070, but never really confirms that it was used for the calculations. And it runs faster on "/cpu:0". How I miss Theano...
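For reference, this is roughly the kind of test I'm running (TF 1.x style), with log_device_placement enabled so TensorFlow reports which device each op was assigned to:
import numpy as np
import tensorflow as tf

with tf.device("/gpu:0"):
    a = tf.constant(np.random.rand(20, 20), dtype=tf.float32)
    b = tf.constant(np.random.rand(20, 20), dtype=tf.float32)
    c = tf.matmul(a, b)

# log_device_placement=True prints the device chosen for each operation
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))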
Could someone point out to me where I can find:
which version of TensorFlow to download that is current (not necessarily the latest)?
concise steps to get it installed, and how to test whether those steps went right?
I'm using Linux Mint 18.
I have used conda and installed TensorFlow 1.1.0, but it never seemed to work correctly within Python. I also came across GitHub issues saying that Anaconda is still working on the TensorFlow GPU version, so no matter what I tried in Anaconda, it never used my NVIDIA Tesla P100-SXM2-16GB card and only used the CPU.
I suggest you use the normal environment until they get tensorflow-gpu to work right in Anaconda.
To check whether tensorflow-gpu works, I used the Inception v3 model with TF 0.12 / TF 1.0.
This is the process I go through to install TensorFlow 1.0:
Step 0.
sudo -i
apt-get install aptitude
aptitude install software-properties-common
apt-get install libcupti-dev pip
apt-get update
apt-get upgrade libc6
Step 1. Install the NVIDIA components. I think you already have these installed.
Download the NVIDIA cuDNN 5.1 for CUDA 8.0 from
https://developer.nvidia.com/rdp/cudnn-download
(Registration in NVIDIA's Accelerated Computing Developer Program is required)
cuDNN 5.1 works well with most of the architectures and operating systems out there.
Step 2. Install bazel and tensorflow
apt-get install bazel
You can go to this link https://pypi.python.org/pypi/tensorflow-gpu/1.1.0rc0 and do a
pip install <python-wheel-version>
If you have Python 2.7 and Python 3.x installed, then use pip2 to install for Python 2.7.
Step 3. Install openjdk
apt-get install openjdk-8-jdk
Step 4. git clone the Inception model code
git clone https://github.com/tensorflow/models.git
cd models
git checkout master
cd inception
This is where Bazel comes into the picture. See Bazel's Getting Started docs for a more detailed explanation of what a target is. So, if you do a
ls -lstr
you might see 5 Bazel-related symbolic links:
bazel-bin bazel-genfiles bazel-inception bazel-out bazel-testlogs
These are the target directories into which your specific model is built.
Assuming you're in the models/inception directory:
bazel build inception/imagenet_train
This activates the symbolic links.
NOTE: For imagenet_train.py to work you need to prepare the ImageNet dataset. You can either skip this part or go through the following:
STEP 5. Prepare the Imagenet dataset
Before you run the training script for the first time, you will need to download and convert the ImageNet data to native TFRecord format.
To begin, you will need to sign up for an account with ImageNet to gain access to the data. Look for the sign-up page, create an account and request an access key to download the data.
After you have USERNAME and PASSWORD, you are ready to run our script. Make sure that your hard disk has at least 500 GB of free space for downloading and storing the data. Here we select DATA_DIR=$HOME/imagenet-data as such a location but feel free to edit accordingly.
When you run the below script, please enter USERNAME and PASSWORD when prompted. This will occur at the very beginning. Once these values are entered, you will not need to interact with the script again.
# location where the ImageNet data will be placed
DATA_DIR=$HOME/imagenet-data
Here $HOME is /root
# build the preprocessing script.
bazel build inception/download_and_preprocess_imagenet
# run it
bazel-bin/inception/download_and_preprocess_imagenet "${DATA_DIR}"
# Place the TFRecord files at /root/dataset
Step 6. Source bazel and tensorflow
This step is very important. It will activate the Python packages, and I think you may be getting errors because the Python package for TensorFlow is not activated.
If you have skipped step 5 then you might want to go to
/models/inception/sample
and run the gpu.py script
python gpu.py
This should verify that your TensorFlow version works with your GPU.
source /opt/DL/bazel/bin/bazel-activate
source /opt/DL/tensorflow/bin/tensorflow-activate
You can also check by importing TensorFlow into Python, e.g.:
import tensorflow as tf
Find a hello-world example on their site; if it gives errors then TensorFlow has not been installed properly.
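As a rough sketch, the classic TF 1.x hello world looks like this:
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))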
Step 7. Run the ImageNet training. (You can skip this step if you skipped step 5.)
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=256 --train_dir=/tmp --data_dir=/root/dataset/ --max_steps=100
I want to implement this example, and thus I need to install Python along with some libraries, including scikit-learn, NumPy, SciPy, matplotlib.pyplot, pandas, Keras and TensorFlow, on my Windows 10 machine.
Currently, I cannot use my GPU with TensorFlow. I tried installing CUDA, but I am still having difficulties setting the path variables for Python. I also tried installing TensorFlow with Anaconda, but that didn't help.
May I get a suggestion on installing Python and its machine-learning packages on Windows with NVIDIA GPU support in a way that doesn't have dependency issues?
Install Python 3.6, then use pip to install those packages. pip should be bundled with your Python install.
Anaconda has personally caused me many issues on Windows; try to avoid it if possible, in my opinion.
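For example, a plain pip setup along these lines (a rough sketch; tensorflow-gpu assumes CUDA and cuDNN are already installed, otherwise use the plain tensorflow package for the CPU-only build):
pip install numpy scipy scikit-learn pandas matplotlib
pip install keras tensorflow-gpu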
I also installed TensorFlow recently and found it went very smoothly. As you have installed VS2015, you can then install CUDA; when you install the latest CUDA, it will configure the environment path and the VS2015 configuration automatically. After this, install Python 3.5, then use pip install tensorflow. Then you can run the TensorFlow demo; when you encounter a path issue, just add it to the path (I remember there were a few file path issues). And when you use Python, sometimes you'll encounter a "module not found" error; then just install those modules with pip.