I have a Python script that I am trying to run daily using Windows Task Scheduler. Based on some SO threads, I wrote a batch script and created the scheduler task. I am using Anaconda (conda 4.5.11) for virtual environments. When I run the Python script from "Anaconda Prompt" or "Windows CMD" it works just fine.
However, when I use the batch script to execute my Python script from "Windows CMD", it fails to find the intended conda environment.
Here is my batch script:
set original_dir=%CD%
set conda_root_dir=C:\Anaconda3\Scripts
call %conda_root_dir%\activate.bat
cd C:\Anaconda3\Scripts
call activate yttv_crawler
python "C:\__ Work Station\Py_Projects\YT_TV_Crawler\index.py"
call deactivate
cd %original_dir%
exit /B 1
After running the batch script manually from CMD for test purposes, I am getting this:
C:\Users\aafaysal\Desktop>yt_tv_crawler.bat
C:\Users\aafaysal\Desktop>set original_dir=C:\Users\aafaysal\Desktop
C:\Users\aafaysal\Desktop>set venv_root_dir=C:\Anaconda3\envs\yttv_crawler
C:\Users\aafaysal\Desktop>set conda_root_dir=C:\Anaconda3\Scripts
C:\Users\aafaysal\Desktop>call C:\Anaconda3\Scripts\activate.bat
(base) C:\Users\aafaysal\Desktop>cd C:\Anaconda3\Scripts
(base) C:\Anaconda3\Scripts>call activate yttv_crawler
Could not find conda environment: yttv_crawler
You can list all discoverable environments with `conda info --envs`.
(base) C:\Anaconda3\Scripts>python "C:\__ Work Station\Py_Projects\YT_TV_Crawler\index.py"
Traceback (most recent call last):
  File "C:\__ Work Station\Py_Projects\YT_TV_Crawler\index.py", line 1, in <module>
    from data_access_layer import save_videos
  File "C:\__ Work Station\Py_Projects\YT_TV_Crawler\data_access_layer.py", line 1, in <module>
    import pymysql
ModuleNotFoundError: No module named 'pymysql'
(base) C:\Anaconda3\Scripts>call deactivate
C:\Anaconda3\Scripts>cd C:\Users\aafaysal\Desktop
C:\Users\aafaysal\Desktop>exit /B 1
But my conda environment does exist:
(base) C:\Users\aafaysal>conda --version
conda 4.5.11
(base) C:\Users\aafaysal>conda info --envs
# conda environments:
#
base * C:\Anaconda3
iqtools C:\Anaconda3\envs\iqtools
yttracker C:\Anaconda3\envs\yttracker
yttv_crawler C:\Anaconda3\envs\yttv_crawler
Here is the list of System Path Environment variables.
Please help me out.
NB: I know there are a lot of similar questions out there; I have checked/tried nearly everything and still have not been able to fix this.
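For reference, a common fix on conda 4.5.x is to hand activate.bat the environment in a single call, by full path, so that a second conda install on PATH cannot shadow the intended one. A sketch of the batch script with that change (paths taken from the question; untested on other setups):

```batch
rem Sketch of a likely fix (paths assumed from the question): activate the
rem env by its full path in one call to activate.bat, rather than activating
rem base first and then calling whatever "activate" resolves to on PATH.
set original_dir=%CD%
call C:\Anaconda3\Scripts\activate.bat C:\Anaconda3\envs\yttv_crawler
python "C:\__ Work Station\Py_Projects\YT_TV_Crawler\index.py"
call C:\Anaconda3\Scripts\deactivate.bat
cd /d %original_dir%
```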
Related
I'm running Anaconda on Ubuntu 20.04 via DigitalOcean droplet. I've created a new project folder separate from /home and under that project folder, I'm running into the following issues:
the env gst is not being created from the .yaml file when I run the line below:
conda env create -n gst --file build/conda/conda-3-8-env.yaml
I can find the file, but when I attempt to activate the environment from bash, I receive the following errors:
/home/josh/anaconda3/bin/conda: line 3: import: command not found
/home/josh/anaconda3/bin/conda: conda: line 6: syntax error near unexpected token `sys.argv'
/home/josh/anaconda3/bin/conda: conda: line 6: `if len(sys.argv) > 1 and sys.argv[1].startswith('shell.') and sys.path and sys.path[0] == '':'
Unsure of how to proceed from here. Is it because I'm working in a new project folder? Do I need to edit .condarc at all?
I've created a pipenv environment and now I'm creating a shell script called bootstrap.sh to activate it and run a flask server, but the following line is producing an error
bootstrap.sh reads as follows
#!/bin/sh
...
source "$(pipenv --venv)\\Scripts\\activate"
...
Running simply pipenv --venv returns C:\Users\johng\.virtualenvs\cashman-flask-project-2vwc8e6G
But executing the shell script returns a No such file or directory error
$ source bootstrap.sh
/Scripts/activate: No such file or directoryask-project-2vwc8e6G
This is in the VS Code bash terminal on Windows 10. The file certainly exists and can be navigated to, so what is causing this?
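A likely culprit: in an sh script, backslashes are escape characters rather than path separators, and on Windows pipenv's output ends with a carriage return (which is why the error message overwrites itself on screen). A minimal sketch of the normalization, with pipenv's output simulated by the hardcoded path from the question:

```shell
#!/bin/sh
# Sketch: normalize the venv path before sourcing the activate script.
# pipenv_output stands in for `pipenv --venv`, whose real output on
# Windows carries a trailing carriage return.
pipenv_output='C:\Users\johng\.virtualenvs\cashman-flask-project-2vwc8e6G'$(printf '\r')
# Strip the CR and turn backslashes into forward slashes.
venv=$(printf '%s' "$pipenv_output" | tr -d '\r' | tr '\\' '/')
echo "$venv/Scripts/activate"
```

In the real bootstrap.sh the equivalent line would be `. "$(pipenv --venv | tr -d '\r' | tr '\\' '/')/Scripts/activate"`.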
I have the following conda list:
➜ ~ conda env list
# conda environments:
#
base * /home/ubuntu/anaconda3
amazonei_mxnet_p36 /home/ubuntu/anaconda3/envs/amazonei_mxnet_p36
aws_neuron_mxnet_p36 /home/ubuntu/anaconda3/envs/aws_neuron_mxnet_p36
aws_neuron_pytorch_p36 /home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36
aws_neuron_tensorflow_p36 /home/ubuntu/anaconda3/envs/aws_neuron_tensorflow_p36
mxnet_latest_p37 /home/ubuntu/anaconda3/envs/mxnet_latest_p37
mxnet_p36 /home/ubuntu/anaconda3/envs/mxnet_p36
python3 /home/ubuntu/anaconda3/envs/python3
pytorch_latest_p37 /home/ubuntu/anaconda3/envs/pytorch_latest_p37
pytorch_p37 /home/ubuntu/anaconda3/envs/pytorch_p37
tensorflow2_latest_p37 /home/ubuntu/anaconda3/envs/tensorflow2_latest_p37
tensorflow2_p37 /home/ubuntu/anaconda3/envs/tensorflow2_p37
tensorflow_p37 /home/ubuntu/anaconda3/envs/tensorflow_p37
I want to enable my environment to be set to tensorflow2_latest_p37
whenever I login to my AWS account. How can I achieve that?
I tried putting source activate tensorflow2_latest_p37 in my .zshrc. But it gave me this error:
/home/ubuntu/.zshrc:source:7: no such file or directory: activate
To have an environment activated automatically, the usual approach is to run the conda activate command from your shell's startup script.
On Ubuntu, I added the following command to my .bashrc file:
conda activate <env>
Now the specified env is activated whenever I open a terminal.
Edit:
I created an EC2 instance in AWS using the zsh shell. Adding the two lines below to .zshrc worked for me:
source ~/anaconda3/bin/activate
conda activate <env_name>
Now my env is activated whenever I log in to my EC2 instance. I hope this works for you too.
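Since conda 4.4 there is also a built-in command that writes the equivalent hook for you; the two .zshrc lines above are roughly what it generates. A sketch (assumes conda is already on PATH; env name taken from the question):

```shell
# Sketch: let conda install its own shell hook into ~/.zshrc, then append
# the activation of the desired env so it is active on every login.
conda init zsh
echo 'conda activate tensorflow2_latest_p37' >> ~/.zshrc
```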
In the user site directory, .pth files are processed once at interpreter startup:
$ echo 'import sys; sys.stdout.write("hello world\n")' > ~/.local/lib/python3.8/site-packages/hello.pth
$ python3.8 -c ""
hello world
And it's the same behaviour in the system site, e.g. /usr/local/lib/python3.8/site-packages/.
But in a venv they are processed twice:
$ rm ~/.local/lib/python3.8/site-packages/hello.pth
$ /usr/local/bin/python3.8 -m venv .venv
$ source .venv/bin/activate
(.venv) $ echo 'import sys; sys.stdout.write("hello world\n")' > .venv/lib/python3.8/site-packages/hello.pth
(.venv) $ python -c ""
hello world
hello world
Why are path configuration files processed twice in a virtual environment?
Looks like it all happens in the site module (not so surprising), in particular in the site.main() function.
The loading of the .pth files happens either in site.addsitepackages() or in site.addusersitepackages(), depending on which folder the file is placed in. More precisely, both of these functions call site.addpackage(), where the processing actually happens.
In your first example, outside a virtual environment, the file is placed in the directory for user site packages. So the console output happens when site.main() calls site.addusersitepackages().
In the second example, within a virtual environment, the file is placed in the virtual environment's own site-packages directory. So the console output happens when site.main() calls site.addsitepackages() directly, and again via site.venv() a couple of lines earlier: venv() also calls site.addsitepackages() if it detects that the interpreter is running inside a virtual environment, i.e. if it finds a pyvenv.cfg file.
So in short: inside a virtual environment site.addsitepackages() runs twice.
As to what the intention is for this behavior, there is a note:
# Doing this here ensures venv takes precedence over user-site
addsitepackages(known_paths, [sys.prefix])
From what I can tell, this matters when the virtual environment has been configured to allow system site packages.
Maybe it could have been solved differently so that the path configuration files are not loaded twice.
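The two call sites are easy to confirm against your own interpreter's copy of site.py (a sketch, assuming a CPython 3 install is on PATH as python3; exact line numbers vary by version):

```shell
# Locate site.py for the current interpreter and list every mention of
# addsitepackages: the def, the call inside venv(), and the call in main().
SITE_PY=$(python3 -c "import site; print(site.__file__)")
grep -n "addsitepackages" "$SITE_PY"
```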
Trying to send some output to Slack using cron on an instance of GCP Compute Engine running Linux Ubuntu 18.04.2 LTS.
The output is generated by a Python script, which is usually run with conda activate my_env followed by python my_script.py.
I have made the bash file executable by doing chmod +x my_script.bash
Here is content of bash file:
#!/bin/bash
source /home/user/miniconda3/bin/activate
conda activate my_env
python /home/user/folder/cron/reports.py -r check_stocks
I would expect adding the following line to crontab -e to give me the same result:
00 21 * * * cd /home/user/folder/cron/ && /bin/bash my_script.bash
When I run cd /home/user/folder/cron/ && /bin/bash my_script.bash in my shell, the script runs fine.
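When a job only fails under cron, the usual cause is cron's minimal environment (no login shell, a stripped-down PATH, a different HOME). Capturing the job's output makes the actual error visible; a sketch of the crontab entry with logging added (paths taken from the question, the log path is an assumption):

```shell
# crontab fragment: same job, but stdout/stderr go to a log for debugging
00 21 * * * cd /home/user/folder/cron/ && /bin/bash my_script.bash >> /tmp/my_script.log 2>&1
```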
Make your .py file executable too (chmod +x file.py). Strictly speaking this is only needed if the script is executed directly; when it is invoked as python file.py, the executable bit is not required.
I had a similar issue, and I gave up on activating the conda environment and instead called the python binary inside the miniconda environment folder directly, like this:
00 21 * * * /home/myuser/miniconda3/envs/my_env/bin/python /home/user/folder/cron/reports.py
I don't think this is the recommended solution, especially for a complex project with many dependencies and imports, but for my simple script it solved the problem.