If I run
ipython notebook
in terminal
it reports
Could not start notebook. Please install ipython-notebook
But I am sure the notebook is indeed installed, because running
conda install ipython-notebook
gives me
Fetching package metadata: ..
Solving package specifications: .
# All requested packages already installed.
# packages in environment at /home/a/anaconda:
#
ipython-notebook 2.3.1 py27_0
so I guess the command is bound to the wrong location.
How can I figure out which binary or directory the command points to?
I am not terribly familiar with conda, but the description tells me it is some sort of package-management tool. One of its strengths, like the virtualenv package, is that you can have completely different environments (installations) of Python packages. This allows you to have a separate set of packages for different requirements, for example.
One drawback is that the different environments need to be activated so that the packages contained therein can be used.
For conda and your particular case, it seems that:
cd ~
source activate anaconda
will activate the environment stored in $HOME/anaconda/.
Note that conda tells you where the environment is stored:
Fetching package metadata: ..
Solving package specifications: .
# All requested packages already installed.
# packages in environment at /home/a/anaconda:
#
ipython-notebook 2.3.1 py27_0
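To answer the direct question of which binary a command resolves to, you can ask the shell itself. A minimal sketch (the loop guards against a command not being on PATH at all; replace the command names with whatever you are debugging):

```shell
# Show where each command resolves on PATH, or report that it doesn't.
for cmd in ipython python; do
  printf '%s -> ' "$cmd"
  command -v "$cmd" || echo "not found on PATH"
done
# PATH ordering decides which match wins when several installs exist.
echo "PATH is: $PATH"
```

If the resolved path is not inside your anaconda directory, the wrong installation is shadowing the conda one.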
Related
I want to install python packages listed in the requirements file of my github repo. However, I have problems installing those python packages into my conda environment.
First of all, I installed conda via Miniforge3-MacOSX-arm64, which supports the M1's arm64 architecture. However, I wasn't able to install some specific Python packages like onnxruntime, because I encountered error messages like these:
ERROR: Could not find a version that satisfies the requirement onnxruntime
ERROR: No matching distribution found for onnxruntime
I assumed that for those specific python packages there is no support yet for the M1.
Therefore, I pursued another approach. I set Terminal to "Open with Rosetta". The plan was to install the applications for the Intel x86_64 architecture and let Rosetta translate the binaries to run on arm64. I then uninstalled Miniforge for arm64 and installed Miniforge for x86_64, named Miniforge3-MacOSX-x86_64. With that setup I was able to install all the Python packages listed in the requirements file, and with pip freeze I can also confirm that they have been installed. However, I am somehow not able to use those packages. For instance, if I want to run pytest I get the following error:
zsh: illegal hardware instruction pytest
I assumed Rosetta would take care of that, i.e. that I could also use x86_64 applications on arm64. But somehow it doesn't work. I have tried a lot of different things and am out of ideas.
Does anyone know what the problem is? I would be also thankful for advice and suggestions how to properly set up a python environment on Mac M1.
I had the same problem a couple of days ago; I'm using an M1 Pro. I was trying to install the Python packages using only pip, but I got a number of errors, so I decided to install them with conda.
In my case it worked. Here is what I did:
First, enable "Open with Rosetta" for your terminal (zsh).
And then:
# create environment in conda
conda create -n venv python=3.8 # with your python version
# activate
conda activate venv
Then visit the conda website to look for the packages you need.
Suppose you are looking for the pytest package: search for it, and you'll get a result showing the available package and channel.
You need to enable that specific channel to get the package, with this command:
# config channel
conda config --append channels conda-forge # available channel name
# then install
conda install --yes --file requirements.txt
Make sure you have the same version of pytest in your requirements.txt file (e.g. pytest==6.2.5).
Hopefully this works; if not, after activating the environment, try installing with pip:
pip install -r requirements.txt
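As a sanity check for the Rosetta approach, you can ask the running interpreter which architecture it was built for. A small sketch (the value depends on your machine: an Intel build of Python reports x86_64 even when translated by Rosetta on an M1, while a native build reports arm64):

```python
import platform

# Architecture the running Python interpreter was built for:
# "x86_64" for an Intel build (including under Rosetta), "arm64" for native M1.
print(platform.machine())

# Fuller description of the interpreter's platform, for comparison.
print(platform.platform())
```

If this prints arm64 inside your supposedly x86_64 environment, the wrong interpreter is on your PATH.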
I am developing a simple python package (on macOS 10.14) and I have problems in setting the instructions how to install it. Right now the package is not available anywhere yet, but think of it as a dummy "hello world" package with a dummy "hello world" function inside a dummy "hello world" module. Of course it has a proper setup.py script that would allow users to install and uninstall the package with pip.
When I install and test it myself everything works fine, the problem is not related to the package itself.
The issue is that I cannot make conda virtual environments and pip work together... Next to my setup.py script there is an environment.yaml file that specifies the dependencies required for my package. Based on this file I create a virtual environment with:
conda env create --prefix ENV -f environment.yaml
I have conda 4.7.12 with Python 3.7.3 inside, so the virtual environment has its own pip. So I activate the virtual environment and explicitly call the inner pip to install my package into the virtual environment:
/Users/my_name/Desktop/dev_dir/ENV/bin/pip install . --user
The installation is successful and the package can be imported. However when I deactivate the virtual environment with conda deactivate and run python interpreter from the conda base environment (version 3.6.9) I can still load my package! For some reason it is available outside of that particular virtual environment...
Later, when I run the 'inner' pip from the conda base shell:
/Users/my_name/Desktop/dev_dir/ENV/bin/pip uninstall pkg
The removal seems to go through as well. I get a message:
Uninstalling pkg-0.0.0.9000:
Would remove:
/Users/my_name/.local/lib/python3.7/site-packages/pkg-0.0.0.9000.dist-info/*
/Users/my_name/.local/lib/python3.7/site-packages/pkg/*
Proceed (y/n)? y
Successfully uninstalled pkg-0.0.0.9000
suggesting that the package was indeed installed in a directory .local, outside conda virtual environments.
And best for last: even after this uninstallation, when I run a Python interpreter (regardless of which environment it is from) and try to import pkg, it still works! When I then type pkg in the interpreter, I get the path to my development directory:
>>> import pkg
>>> pkg
<module 'pkg' from '/Users/my_name/Desktop/dev_dir/pkg/__init__.py'>
Could someone please help me disentangle this mess? I would like to have my package installed inside the virtual environment, nicely isolated. And also - it should be gone after uninstallation, right?
PS. PYTHONPATH variable is never set anywhere at any stage, I have been checking that...
when I then type pkg in the interpreter I get the path to my development directory
This can only happen if:
You modified your PYTHONPATH to include /Users/my_name/Desktop/dev_dir which you didn't do
You are running the interpreter while you are in the folder /Users/my_name/Desktop/dev_dir, seems likely as you called it your development folder.
Check the output of print(sys.path), which lists all directories that are searched when doing import (standard locations + PYTHONPATH) and also print(os.getcwd()) as the current working directory is also searched
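The check described above can be run directly in the interpreter, for example:

```python
import os
import sys

# Current working directory; it is also searched when resolving imports.
print(os.getcwd())

# Every directory searched when doing `import pkg`
# (standard locations + PYTHONPATH + the working directory).
for p in sys.path:
    print(p)
```

If /Users/my_name/Desktop/dev_dir shows up in either output, that explains why the package stays importable.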
You tried installing your package to your activated conda environment using
/Users/my_name/Desktop/dev_dir/ENV/bin/pip install . --user
Looking at [the docs](https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-user) however:
--user
Install to the Python user install directory for your platform. Typically ~/.local/
So the --user option is messing with your intention to install into the currently active environment. But pip actually does that by default when run inside a virtual environment. So simply do:
conda activate <your envname>
pip install .
@FlyingTeller already correctly identified the issue. I just wanted to point out that you could further streamline your process by adding the installation of your package to your YAML definition. For example,
name: my_env
channels:
  - defaults
dependencies:
  - python=3.7.3
  - pip
  - pip:
    - -e /Users/my_name/Desktop/dev_dir/pkg
This is also further in line with the best practices (see "Using Pip in a Conda Environment").
Just wanted to hopefully clear things up: this keeps happening to many people. If you forget the rule (NO root installs with conda), the permissions on your files can change, and suddenly conda keeps asking for sudo AND fails. Conda = NO SUDO! Hope you got it fixed!
You have to add the pip package to your environment (see https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html); otherwise, packages will be installed by the global pip installation, which makes them accessible to all environments.
Therefore, create an environment using
conda create --name exampleenv pip
instead of
conda create --name exampleenv
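To see which interpreter and pip an environment actually resolves to, a quick check from inside Python (shutil.which may print None if no pip is on PATH):

```python
import shutil
import sys

# Path of the interpreter currently running; inside an activated conda env
# this should point into that env's directory.
print(sys.executable)

# The pip binary that would run from your PATH, or None if there isn't one.
print(shutil.which("pip"))
```

If the two paths live in different environments, pip installs will not land where you expect.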
I'm working on packaging up a suite of tools that can be installed in different environments, and I've run into many problems with dependencies, which are an issue since this package will be installed in air-gapped environments.
The package will be installed via Anaconda, and I have provided the installation script. In order to create the package, I ran the following command:
conda metapackage toolkit_bundle 0.0.1 --dependencies r-essentials tensorflow gensim spacy r-ggplot2 r-plotly r-dplyr r-rjson r-tm r-reshape2 r-shiny r-sparklyr r-slam r-nlp r-cluster r-ggvis r-plyr r-tidyr r-zoo r-magrittr r-xtable r-htmlwidgets r-formattable r-highcharter --summary "Toolkit Bundle"
This produced a tar.bzip2 file that I held on to and tried to install via the conda command
conda install toolkit_bundle.tar.bz2
The command seemed to run successfully, but I was unsuccessful in importing the modules in Python. I also tried creating a virtual conda environment and importing the package.
conda create -n myenv toolkit_bundle-0.0.1.tar.bz2
There was no error, but none of the modules were able to be imported either.
Am I missing a step in this process, or is my thought process flawed?
Update:
It looks like my thinking was pretty flawed. A quick skim of the conda metapackage command documentation revealed the following:
Tool for building conda metapackages. A metapackage is a package with no files, only metadata. They are typically used to collect several packages together into a single package via dependencies.
So my initial understanding was incorrect, and the package only contains metadata. Are there any other ideas for creating packages with dependencies resolved that can be installed in an air gapped environment?
I think you want to look at the command conda build for making packages, which just requires writing an appropriate meta.yaml file containing the dependencies, along with some other build parameters. There is good documentation for doing so on the conda website: https://conda.io/docs/user-guide/tasks/build-packages and there is a repo of examples.
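As a rough illustration only (the package name, version, and run dependencies are taken from the question; a real recipe will need more fields, such as a build section), a minimal meta.yaml for conda build might look like:

```yaml
package:
  name: toolkit_bundle
  version: "0.0.1"

requirements:
  run:
    - python
    - tensorflow
    - gensim
    - spacy
    - r-essentials
```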
If you have a working PIP package, you can also auto-generate a conda package recipe using conda skeleton.
Once you have built a set of packages locally, you can use the --use-local option to conda install to install from your local repo, with no need for an internet connection (as long as the packages for all the dependencies are in your local repo).
I was able to download the packages I needed via the PyPI website, and after determining the dependencies, I manually downloaded them and wrote a script to install them in the required order.
I am trying to install a new conda environment that will be totally separate from my other environments, so I run:
conda create --name foot35 python=3.5
Anaconda then asks for my approval to install these NEW packages:
asn1crypto: 0.22.0-py35he3634b9_1
ca-certificates: 2017.08.26-h94faf87_0
cachecontrol: 0.12.3-py35h3f82863_0
certifi: 2017.7.27.1-py35hbab57cd_0
cffi: 1.10.0-py35h4132a7f_1
chardet: 3.0.4-py35h177e1b7_1
colorama: 0.3.9-py35h32a752f_0
cryptography: 2.0.3-py35h67a4558_1
distlib: 0.2.5-py35h12c42d7_0
html5lib: 0.999999999-py35h79d4e7f_0
idna: 2.6-py35h8dcb9ae_1
lockfile: 0.12.2-py35h667c6d9_0
msgpack-python: 0.4.8-py35hdef45cb_0
openssl: 1.0.2l-vc14hcac20b0_2 [vc14]
packaging: 16.8-py35h5fb721f_1
pip: 9.0.1-py35h69293b5_3
progress: 1.3-py35ha84af61_0
pycparser: 2.18-py35h15a15da_1
pyopenssl: 17.2.0-py35hea705d1_0
pyparsing: 2.2.0-py35hcabcaab_1
pysocks: 1.6.7-py35hb30ac0d_1
python: 3.5.4-hedc2606_15
requests: 2.18.4-py35h54a615f_1
setuptools: 36.5.0-py35h21a22e4_0
six: 1.10.0-py35h06cf344_1
urllib3: 1.22-py35h8cc84eb_0
vc: 14-h2379b0c_1
vs2015_runtime: 14.0.25123-hd4c4e62_1
webencodings: 0.5.1-py35h5d527fb_1
wheel: 0.29.0-py35hdbcb6e6_1
win_inet_pton: 1.0.1-py35hbef1270_1
wincertstore: 0.2-py35hfebbdb8_0
I don't know why it suggests these specific ones. I looked up lockfile and its website says:
Note: This package is deprecated.
Here is a screenshot of my command prompt as additional information.
I am trying to do a clean install that is unrelated/independent to the root environment.
Why is conda trying to install these things and how do I fix it?
conda create will "Create a new conda environment from a list of specified packages." ( https://conda.io/docs/commands/conda-create.html )
What list??!? The .condarc file is the conda configuration file.
https://conda.io/docs/user-guide/configuration/use-condarc.html#overview
The .condarc file can change many parameters, including:
Where conda looks for packages.
If and how conda uses a proxy server.
Where conda lists known environments.
Whether to update the bash prompt with the current activated environment name.
Whether user-built packages should be uploaded to Anaconda.org.
**Default packages or features to include in new environments.**
Additionally, if you ever typed conda config, even accidentally...
The .condarc file is not included by default, but it is automatically created in your home directory the first time you run the conda config command.
A .condarc file may also be located in the root environment, in which case it overrides any in the home directory.
If you would like a single clean env, Boshika's recommendation of the --no-default-packages flag works for a single instance. Beyond that, you can check and modify the default packages for all future envs. ( https://conda.io/docs/user-guide/configuration/use-condarc.html#always-add-packages-by-default-create-default-packages )
Always add packages by default (create_default_packages)
When creating new environments, add the specified packages by default. The default packages are installed in every environment you create. You can override this option at the command prompt with the --no-default-packages flag. The default is to not include any packages.
EXAMPLE:
create_default_packages:
  - pip
  - ipython
  - scipy=0.15.0
Lockfile may be there due to legacy requirements across all operating systems. Hopefully, you have the tools to remove it if you choose.
To prevent conda from installing all the default packages, you can try this:
conda create --name foot35 --no-deps python=3.5
Please don't lose hope; it was very weird for me too.
What you have to do is just follow these steps:
1. Download Anaconda for your system from its official site and install it: https://repo.continuum.io
2. After the installation process, you can select your own packages from there; you don't need to download anything from anywhere else, as it is full of packages.
3. If you want to work with Python, download the Spyder IDE; it's very useful for machine-learning libraries.
4. Don't create environments other than the default root; otherwise you will have to duplicate all the files again. If there is an error while installing in root, close the window and run it again as administrator; after that it works fine.
Since all the files are in your root environment, you don't have to worry about paths in the future, and you can easily install and uninstall packages from there, like numpy, pandas, tensorflow (and its GPU variant), scikit-learn, etc.
Thank you
These packages are generally useful if you wish to pip install ... anything. Without many of them, doing a pip install requests could result in errors such as these (and more):
No Module named Setuptools
pip: command not found
pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available
The issue that the conda create ... exposes is that the packages it wants to pull down are variable (based on when you create the environment). If you wish to maintain the same environment for you and for those who may collaborate with you, then freezing or pinning conda create's default installed packages may be necessary.
One way of doing this is creating your environment with conda env create using a conda environment YAML file such as this example:
dependencies:
  - ca-certificates=2018.03.07
  - certifi=2018.4.16
  - libedit=3.1.20170329
  - libffi=3.2.1
  - ncurses=6.1
  - openssl=1.0.2o
  - pip=10.0.1
  - python=3.6.6
  - readline=7.0
  - setuptools=40.0.0
  - sqlite=3.24.0
  - tk=8.6.7
  - wheel=0.31.1
  - xz=5.2.4
  - zlib=1.2.11
conda env create -n <NAME_OF_ENVIRONMENT> -f <PATH_TO_CONDA_REQUIREMENTS_FILE>
(note it's conda env create not conda create)
I am working on a project where they are using Ansible to run several conda installs. I need to install two additional packages from github that have dependencies that are already covered by the existing conda installs with the second package having a dependency on the first.
Using the Ansible code below, I can get the first package to install without reinstalling the dependencies.
- name: install mypackage
shell: /home/myname/envs/myproject/bin/pip install --install-option="--prefix=/home/myname/envs/myproject" --egg https://github.com/myname/mypackage/archive/my_branch.zip
This gets me 95% of the way there, however, when I try to install the second package, it doesn't recognize the first package as having been installed and fails.
I am new to this and I have been throwing things up against the wall but I'm not able to install the first package in such a way where:
It recognizes the existing conda installs
The second package identifies the first one
From what I can understand from your task, you are using a venv to install the packages; that's good. I don't understand, though, why you are using the shell module to handle the install. This is not good.
You can handle all this with ansible' pip module :
- name: "Install mypackage"
  pip:
    virtualenv: /home/{{ lookup('env','USER') }}/envs/myproject/
    name: "{{ item }}"
  with_items:
    - "https://github.com/myname/mypackage1/archive/my_branch.zip"
    - "https://github.com/myname/mypackage2/archive/my_branch.zip"
This should correctly install the packages in the order you require, without the hassle of having to work your way through shell output.
Note that you can mix normal Python packages with eggs, etc.
As an alternative to virtualenv you can use executable.
Have a look at the docs
I believe the question is how to use Ansible to pip install packages within a conda environment. Note that it is perfectly possible to use pip install within a conda environment, which is particularly useful when the desired package does not exist in the conda repositories and cannot be installed with conda install.
The goal is thus to use the environment created by conda, and not a virtualenv (for which, btw, ansible's pip module provides specific parameters).
I have managed to do so by using ansible's pip module and pointing the pip executable to the one installed within the desired conda environment.
See code below, notice usage of the executable variable:
- name: Install pip packages WITHIN a designated conda environment
  pip:
    name: some_package_name
    executable: "/home/[username]/[anaconda3]/envs/[conda_env_name]/bin/pip"
    # ^-- Of course you will need to ensure the correct path.
This will pip install the packages inside the designated conda environment.