I'm trying to understand how Nix works. For that purpose I tried to create a simple environment to run Jupyter notebooks.
When I run the command:
nix-shell -p "\
with import <nixpkgs> {};\
python35.withPackages (ps: [\
ps.numpy\
ps.toolz\
ps.jupyter\
])\
"
I get what I expected -- a shell in an environment with Python and all the listed packages installed, and all the expected commands accessible on the PATH:
[nix-shell:~/dev/hurricanes]$ which python
/nix/store/5scsbf8z3jnz8ardch86mhr8xcyc8jr2-python3-3.5.3-env/bin/python
[nix-shell:~/dev/hurricanes]$ which jupyter
/nix/store/5scsbf8z3jnz8ardch86mhr8xcyc8jr2-python3-3.5.3-env/bin/jupyter
[nix-shell:~/dev/hurricanes]$ jupyter notebook
[I 22:12:26.191 NotebookApp] Serving notebooks from local directory: /home/calsaverini/dev/hurricanes
[I 22:12:26.191 NotebookApp] 0 active kernels
[I 22:12:26.191 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/?token=7424791f6788af34f4c2616490b84f0d18353a4d4e60b2b5
So, I created a new folder with a single default.nix file with the following contents:
with import <nixpkgs> {};
python35.withPackages (ps: [
  ps.numpy
  ps.toolz
  ps.jupyter
])
When I run nix-shell in this folder, though, it seems like everything is installed but the PATH is not set:
[nix-shell:~/dev/hurricanes]$ which python
/usr/bin/python
[nix-shell:~/dev/hurricanes]$ which jupyter
[nix-shell:~/dev/hurricanes]$ jupyter
The program 'jupyter' is currently not installed. You can install it by typing:
sudo apt install jupyter-core
From what I read here, I was expecting the two situations to be equivalent. What did I do wrong?
Your default.nix file is supposed to hold the information to build a derivation when you call it with nix-build. When you call it with nix-shell, it just sets up a shell in which that derivation could be built. In particular, it sets the PATH variable to contain everything that is listed in the buildInputs attribute:
with import <nixpkgs> {};
stdenv.mkDerivation {
  name = "my-env";
  # src = ./.;
  buildInputs = [
    (python35.withPackages (ps: [
      ps.numpy
      ps.toolz
      ps.jupyter
    ]))
  ];
}
Here, I've commented out the src attribute, which is required if you want to run nix-build but isn't necessary when you are just running nix-shell.
In your last sentence, I suppose you are referring more precisely to this:
https://github.com/NixOS/nixpkgs/blob/master/doc/languages-frameworks/python.section.md#load-environment-from-nix-expression
I don't understand this advice: to me it just looks plain false.
Related
I'm new to working with Azure Functions and tried to work out a small example locally, using VS Code with the Azure Functions extension.
Example:
# First party libraries
import logging
# Third party libraries
import numpy as np
from azure.functions import HttpResponse, HttpRequest
def main(req: HttpRequest) -> HttpResponse:
    seed = req.params.get('seed')
    if not seed:
        try:
            body = req.get_json()
        except ValueError:
            pass
        else:
            seed = body.get('seed')
    if seed:
        np.random.seed(seed=int(seed))
        r_int = np.random.randint(0, 100)
        logging.info(r_int)
        return HttpResponse(
            "Random Number: " f"{str(r_int)}", status_code=200
        )
    else:
        return HttpResponse(
            "Insert seed to generate a number",
            status_code=200
        )
When numpy is installed globally this code works fine. If I install it only in the virtual environment, however, I get the following error:
Worker failed to function id 1739ddcd-d6ad-421d-9470-327681ca1e69.
[15-Jul-20 1:31:39 PM] Result: Failure
Exception: ModuleNotFoundError: No module named 'numpy'. Troubleshooting Guide: https://aka.ms/functions-modulenotfound
I checked multiple times that numpy is installed in the virtual environment, and the environment is also specified in the .vscode/settings.json file.
pip freeze of the virtualenv "worker_venv":
$ pip freeze
azure-functions==1.3.0
flake8==3.8.3
importlib-metadata==1.7.0
mccabe==0.6.1
numpy==1.19.0
pycodestyle==2.6.0
pyflakes==2.2.0
zipp==3.1.0
.vscode/settings.json file:
{
    "azureFunctions.deploySubpath": ".",
    "azureFunctions.scmDoBuildDuringDeployment": true,
    "azureFunctions.pythonVenv": "worker_venv",
    "azureFunctions.projectLanguage": "Python",
    "azureFunctions.projectRuntime": "~2",
    "debug.internalConsoleOptions": "neverOpen"
}
I tried to find something in the documentation, but found nothing specific regarding the virtual environment. Am I missing something?
EDIT: I'm on a Windows 10 machine btw
EDIT: I included the folder structure of my project in the image below
EDIT: Added the content of the virtual environment Lib folder in the image below
EDIT: Added a screenshot of the terminal using the pip install numpy command below
EDIT: Created a new project with a new virtual env and reinstalled numpy, screenshot below, problem still persists.
EDIT: Added the launch.json code below
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to Python Functions",
            "type": "python",
            "request": "attach",
            "port": 9091,
            "preLaunchTask": "func: host start"
        }
    ]
}
SOLVED
So the problem was neither with Python nor with VS Code. The problem was that the execution policy on my machine (new laptop) was set to restricted, and therefore the .venv\Scripts\Activate.ps1 script could not be run.
To resolve this problem, just open PowerShell with admin rights and run set-executionpolicy remotesigned. Restart VS Code and all should work fine.
I didn't see the error because of all the logging in the terminal that happens when you start Azure. I'll mark the answer of @HuryShen as correct, because the comments got me to the solution. Thanks to all of you!
For this problem, it's not clear to me whether you hit the error when running the function locally or on the Azure cloud, so I'll provide suggestions for both situations.
1. If the error shows up when you run the function on Azure, you may not have installed the modules successfully. When deploying the function from local to Azure, you need to add the module to requirements.txt (as Anatoli mentioned in the comments). You can generate requirements.txt automatically with the command below:
pip freeze > requirements.txt
After that, you will find numpy==1.19.0 listed in requirements.txt.
Now deploy the function from local to Azure with the command below; it will install the modules on Azure and the function will work fine there.
func azure functionapp publish <your function app name> --build remote
2. If the error shows up when you run the function locally: since you provided the modules installed in worker_venv, it seems you have installed the numpy module successfully. I also tested it locally on my side; I installed numpy and it works fine. So I think you should check whether your virtual environment (worker_venv) exists in the correct location. Below is my function structure in local VS Code; please check whether your virtual environment is in the same location as mine.
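As a quick way to confirm that numpy really resolves from inside worker_venv, you could also run a small check with the venv's interpreter (for example worker_venv\Scripts\python.exe on Windows; the exact path and file name below are just assumptions about your layout):
# check_numpy.py -- minimal sanity check, run with the venv's interpreter:
#   worker_venv\Scripts\python.exe check_numpy.py
import sys
import numpy

print(sys.executable)     # should point inside worker_venv
print(numpy.__version__)  # should print 1.19.0
print(numpy.__file__)     # should resolve under worker_venv\Lib\site-packages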
-----Update------
Run the command to set the execution policy and then activate the virtual environment:
set-executionpolicy remotesigned
.venv\Scripts\Activate.ps1
I could solve my issue by uninstalling python3 (see here for a guide: https://stackoverflow.com/a/60318668/11986067).
After starting the Azure Functions app via F5 or func start, the following output was shown:
This version was incorrect: I had chosen Python 3.7.0 when creating the project in the Azure extension. After deleting this python3 version, the correct version was shown and the import issue was solved:
I created my own Python package that contains a script. It can be installed with
python setup.py install
Under Linux I can then run the script when the correct conda environment is loaded with:
my_script.py
Under Windows the script is not found although I specify it twice in the setup.py file:
config = {
    ...
    'version': '0.0.25',
    'scripts': ['scripts/my_script.py', 'scripts/my_script.bat'],
    'packages': ['my_package'],
    'name': 'ms_package',
    'data_files': [('scripts', ['scripts/my_script.py', 'scripts/my_script.bat']),
                   ('static', ['static/my_data.csv'])]
}
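(For context, this dict is what gets handed to setup() at the bottom of setup.py, roughly like the sketch below; the real file isn't shown here and may import setup from distutils instead of setuptools.)
# bottom of setup.py -- sketch only; the config dict is defined as above
from setuptools import setup

setup(**config)  # expands to setup(version='0.0.25', scripts=[...], ...)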
The script starts with a shebang:
#!/usr/env python
The file is found, but when I run it I get the message that this file has no app associated with it. How do I set this up correctly under Windows? Do I need a second file that specifies an app?
I added the my_script.bat that loads the Anaconda environment, but it then fails with the same error. And when I put
python my_script.py
in there, it does not find my_script. So, I would have to do something equivalent to
python `which my_script.py`
Is there functionality like this on Windows (Windows 10)? I would like to have an icon for the user to click on, for people who don't know how to use the command line.
This is a very basic question, from a complete beginner, on how to write Python code and run a script.
I'm writing a script using Xcode 9.4.1 which is supposed to be for Python 3.6. I then have an sh script, run.sh, in the same folder as the script (say "my_folder"), which simply looks like
python my_script.py
The python script looks like
from tick.base import TimeFunction
import numpy as np
import matplotlib.pyplot as plt
v = np.arange(0., 10., 1.)
f_v = v + 1
u = TimeFunction((v, f_v))
plt.plot(v, u.value(v))
print('donne!\n')
But as I try to run run.sh from the terminal, I get an "ImportError: No module named tick.base" error.
But the tick folder is actually present in "my_computer/anaconda3/lib/python3.6/site-packages", and up until last week I was using Spyder from the Anaconda Navigator and everything was working correctly, so no "import error" occurred.
The question is quite trivial; in some sense it simply is: what's the typical procedure to write and run a Python script, and how are modules supposed to be imported/installed when running on a given machine?
I need this since my script is to be run on another machine through ssh, and I'm using my laptop to make some attempts. Up to last year I used to work in C and only needed to move some folders with code and .h files.
Thanks for the help!
EDIT 1:
From Spyder 3.2.7, where the script was giving no problem, I printed the output of
import sys
print(sys.path)
Then I manually copied the content into the sys.path variable in my_script.py, reran 'run.sh', and am now getting a new (strange) error:
Traceback (most recent call last):
[...]
File "/Users/my_computer/anaconda3/lib/python3.6/site-packages/tick/array/build/array.py", line 106
def tick_double_array_to_file(_file: 'std::string', array: 'ArrayDouble const &') -> "void":
^
SyntaxError: invalid syntax
First, check that the python you are calling the script with points to the Anaconda python and is the version you expect. You can run the "which python" command on Linux and Mac to see which path python points to. If it points to a different version or build of python than the one you expect, add the needed path to the PATH environment variable. On Linux and Mac this can be done by adding the following line to the .bashrc file in your home folder:
export PATH=/your/python/path:$PATH
And then source the .bashrc file.
source .bashrc
If you are on an operating system like CentOS, breaking the default Python path can break yum, so be careful before changing it.
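A minimal check like the following (run it exactly the way you run my_script.py, e.g. through run.sh; the file name is just an example) will also show which interpreter and version are actually being used:
# check_python.py -- minimal sanity check of the active interpreter
import sys

print(sys.executable)  # should point into .../anaconda3/bin
print(sys.version)     # should report 3.6.x, not 2.7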
I am running a script in PyCharm and under the Project Interpreter I have the path
C:\envs\conda\keras2\python.exe
When I try to run the script via ssh on the server I get a 'no module named' error. I get
/usr/bin/python as the answer to 'which python' on the server itself. Could you tell me which path I must add for the script to run properly?
I am trying to use ipyparallel with a custom kernel I have installed in a conda env. My tools are built with matplotlib 2.0.2. I am running on a Jupyter Hub, with the default Python3 kernel pointing to matplotlib 1.5.3. I can see the version of matplotlib from the respective engines with this example:
import ipyparallel
import matplotlib
def myFunc(n):
    import matplotlib
    status = "mpl version=%s, and num=%d" % (matplotlib.__version__,
                                             n * 10)
    return status

rc = ipyparallel.Client(profile='MJBtest')
all_proc = rc[:]
all_proc.block = True
print("Local: ", matplotlib.__version__)
inlist = [i for i in range(3)]
print("Now calling map_sync")
result = all_proc.map_sync(myFunc, inlist)
print("Parallel result : ", result)
which returns
Local: 1.5.3
Now calling map_sync
Parallel result : ['mpl version=1.5.3, and num=0', 'mpl version=1.5.3, and num=10', 'mpl version=1.5.3, and num=20']
as I expect, because I am running in the default Python 3 kernel. I have built a customized kernel called "cetb3" by creating a conda environment with the tools I want, activating it, and creating a kernelspec file with this command:
ipython kernel install --user --name cetb3
In the cetb3 environment, I can run python, import matplotlib and I see that the version is matplotlib 2.0.2. From this same cetb3 env, I also created a test profile with:
ipython profile create --parallel --profile=MJBtest
In the Jupyter Hub, I can switch the kernel to cetb3, import matplotlib and see that it is at v2.0.2. However, when I start a cluster from MJBtest, and try to run the same code as above with the cetb3 kernel, the cell hangs after the "Now calling map_sync" line and never returns:
Local: 2.0.2
Now calling map_sync
I thought that I might have to create an ipython profile that uses my custom kernel, so I tried adding the name of my profile ("--profile=MJBtest") to the cetb3 kernelspec file, but when I did this the kernel wouldn't even start. I am unclear whether I have to tell my kernel about my profile or vice versa (and how I might do that), or whether there is some other mechanism altogether for pushing my custom environment out to my ipyparallel engines.
So I worked with the sysadmin on our supercomputer, and it turns out that they had configured some customized ipython profiles that start the engine cluster using the ipengine command. In the ipcluster_config.py file, prior to the ipengine command, I was able to specify my custom environment by adding my conda env's bin path to the beginning of the PATH environment variable and then calling source activate for the conda env I wanted to be available on each engine.
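A rough sketch of what that can look like in ipcluster_config.py (the env name and install prefix below are placeholders, and this only covers the PATH part; the source activate step happened in the script that actually launches ipengine):
# ipcluster_config.py -- sketch only; adjust env name and paths for your system
import os

c = get_config()  # provided by IPython when this config file is loaded

# Prepend the conda env's bin directory so engines started from this profile
# pick up that env's python and libraries (e.g. matplotlib 2.0.2).
conda_bin = '/path/to/conda/envs/cetb3/bin'
os.environ['PATH'] = conda_bin + os.pathsep + os.environ.get('PATH', '')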
So, I have Anaconda, OSGeo and Python2.7 installed on my computer.
I'm also using Spyder. In Spyder:
>>> import sys
>>> sys.executable
'C:\\ProgramData\\Anaconda3\\pythonw.exe'
Which is what I want.
However, in the Windows command line and PowerShell:
$ python3
>>> import sys
>>> sys.executable
'C:\\Progra~1\\OSGeo4W\\bin\\python3.exe'
Which is not what I want. I want to use 'C:\\ProgramData\\Anaconda3\\pythonw.exe' (or python.exe, I'm not sure which) when using python3 in the command line.
Also:
$ pip3
Fatal error in launcher: Unable to create process using '"'
I don't get why python3 in the Windows command line points to OSGeo's version of Python 3. Here is my PATH:
C:\Python27\;C:\Python27\Scripts;C:\ProgramData\Anaconda3;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Git\cmd;C:\Program Files\PuTTY\;C:\Progra~1\OSGeo4W\bin\;C:\Program Files\Microsoft\R Open\R-3.4.0\bin
I also have an environment variable called PYTHONHOME
C:\ProgramData\Anaconda3
Moreover (for completeness of information), I have Python 2 installed:
$ python
File "C:\ProgramData\Anaconda3\lib\site.py", line 177
file=sys.stderr)
^
SyntaxError: invalid syntax
($ pip outputs the same thing).
Having python3 and python2.7 both work when using python3 and python (respectively) in the windows command line would be a nice bonus, but it's not really my priority.
There are probably several things you have to take care of:
In general, the search order of the Windows PATH is from left to right, starting with the system PATH; the first matching element wins. In your case this is correct, because the system will search C:\ProgramData\Anaconda3\ first. However, there is no executable called python3 in that folder by default. On my system I created a symlink pointing to python.exe. On your system you can do it in PowerShell like this:
New-Item -Path C:\ProgramData\Anaconda3\python3.exe -ItemType SymbolicLink -Value C:\ProgramData\Anaconda3\python.exe
pip is located in the Scripts\ folder, so in your case you have to add C:\ProgramData\Anaconda3\Scripts to your PATH and create the corresponding symlinks again. In this case you have to create two of them, because pip.exe appends its own name to the script it tries to call (i.e. if your exe file is called foo.exe, it will try to call foo-script.py, which does not exist). You can create the symlinks in PowerShell with these two commands:
New-Item -Path C:\ProgramData\Anaconda3\Scripts\pip3.exe -ItemType SymbolicLink -Value C:\ProgramData\Anaconda3\Scripts\pip.exe
and
New-Item -Path C:\ProgramData\Anaconda3\Scripts\pip3-script.py -ItemType SymbolicLink -Value C:\ProgramData\Anaconda3\Scripts\pip-script.py
This way you will be able to use python3 and pip3 from your cmd line. Please check for similar problems with your Python 2 installation folder.
Hope it helps.