I am using django-crontab to run some cron jobs as part of my project. I have a virtual environment set up for this particular project.
So after activating the environment, I add the jobs using the following command:
python manage.py crontab add
I see that my jobs are successfully added to the OS crontab; however, when I check the logs, I find that it was not able to find certain modules (read: all of them) that were installed in the virtual environment.
However, if I run those cron jobs manually by passing the hash to the run command, they run successfully.
On further inspection I found that when the jobs are added to the crontab, the Python binary they point to is the global (system-level) one instead of the virtualenv's binary.
The only solution I can think of is to run pip install at the system level, but that will mess up the sandbox environment I intend to create.
Any ideas?
django-crontab is not maintained anymore; the last changes to this library happened over 2 years ago. I really advise against continuing to use it.
To fix that bug, you can either use the CRONTAB_PYTHON_EXECUTABLE setting to point at the Python executable from your virtualenv, or use CRONTAB_COMMAND_PREFIX to prepend something that activates the virtualenv before python runs.
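For example, a minimal sketch of the first option in settings.py (the virtualenv path is only an assumption; point it at wherever your env actually lives). You would likely need to re-run python manage.py crontab add afterwards so the crontab entries are regenerated with the new interpreter:

# settings.py -- example path, replace with the python inside your own virtualenv
CRONTAB_PYTHON_EXECUTABLE = '/home/myuser/.virtualenvs/myproject/bin/python'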
I am new to VS Code. I have created a virtual environment and selected that as the interpreter, in which I have run
pip install -r requirements
from the terminal, which allowed pandas to be installed in that virtual environment (I believe, based on the docs).
From that directory in the terminal, I ran
python myscript.py
It works. Generates the output as it should.
I then try to run the same script in the window above and I get the error "no module named pandas". I have not changed the interpreter.
UPDATE
When closing the terminal and then attempting to run the code from the explorer/debug window, I received a "failure to run scripts" notice. I therefore (based on this post) changed my Execution Policies in Windows PowerShell (a separate one, not the VS Code terminal), and now it at least goes into the environment; however, I still get the same error, pandas unknown.
UPDATE 2
I noticed in the tutorial that they created the directory and then created the environment within that. I moved my code to be under the virtual environment I created and then it works, both from the terminal and from the debug/explorer window. However, here I noticed that the interpreter is not characterised as a virtual environment but instead looks like the standard version; now there is no option for me to select working_env as an interpreter.
Running pip freeze shows the latest version of pandas, which is not what I have installed elsewhere, for instance. So this works, but I am now not sure if this is really how I should be using virtual environments.
UPDATE 3
For whatever reason I can now update the interpreter. When I do, however, I am back to where I was. I can run the program in the terminal with the interpreter selected, but running from the debug window I get the error that pandas is missing. Trying to install pandas, I am told the requirement is already satisfied. Running pip freeze confirms that it is installed. I have the feeling I am somehow installing pandas into a different environment than the one I want.
What have I missed in my vscode setup?
Per both the comments, I had a couple of issues here.
For whatever reason, during update 2 I couldn't see the interpreter when I was in the folder initially, but it was there. It is noted in the docs that this can take a while. Regardless, it must show in the bottom left that you are using the virtual environment's interpreter.
Using powershell, I needed to change the execution policy to be unrestricted.
Set-ExecutionPolicy Unrestricted -Scope CurrentUser
Then I hadn't installed pandas in the virtual environment AND with the interpreter. Therefore
. ..\Scripts\activate.ps1
activated the working environment
python -m pip install -r requirements.txt
installed my packages
Running my script was achieved by
python .\myscript.py
and that also runs with the play button (though here I had to press Ctrl+Shift+P, type Terminal: Select Default Shell, ensure the terminal was PowerShell, and restart)
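If you're ever unsure which interpreter the play button or debugger is actually using, a small sanity check (just an illustrative sketch, not part of the original setup) is to make the script print its own interpreter path and compare it with what the activated terminal reports:

import sys

# Path of the interpreter executing this script. Run it once from the
# integrated terminal and once via the play button/debugger; if the two
# paths differ, the debugger is not using your virtual environment.
print(sys.executable)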
I'm teaching a beginners python class, the environment is Anaconda, VS Code and git (plus a few extras from a requirements.txt).
For the Windows students this runs perfectly; however, the Mac students have an existing Python (2.7) to contend with.
For the Windows students (i.e. those with a Windows computer), the environment they debug in matches their console environment. However, the Mac students seem to be locked to their 2.7 environment.
I've tried aliasing, as suggested here and here
alias python2='python'
alias python='python3'
alias pip2='pip'
alias pip='pip3'
I've modified the .bash_profile file
echo 'export PATH="/Users/$USER/anaconda3/bin:$PATH"' >>.bash_profile
Both of these seem to work perfectly to modify their Terminal environments, when launched externally to VS Code. Neither seem to do anything to the environment launched from [cmd]+[`].
I've also tried conda activate base in the terminal, which seems to have no effect on a python --version or a which python
They can run things using python 3, but that means that they need to remember that they are different to the other 2/3 of the students. It's othering for them, and more work for me!
The students are doing fine, launching things from their external terminal, but it would streamline things greatly if the environments could be as consistent as possible across the OSs.
Whilst they are complete beginners, they can run a shell script. They currently have one that installs pip requirements and vs code extensions.
Is there a configuration that will keep the terminal in line with the debug env?
In my opinion the best practice is to create Python virtual environments (personally I love using conda environments, especially on Mac, where you are stuck with an unremovable old Python version). Then VS Code will automatically (after you install the very powerful Python extension) find all your virtual environments. This way you will teach your students the good practice of handling the Python zoo, a.k.a. package incompatibilities. Terminal environment settings will be consistent with VS Code, without depending on aliases that are no longer needed. Obviously, virtual environments are OS independent, so you will be more consistent and remove unnecessary confusion between students on different operating systems.
The additional bonus of the virtualenvs is that you can create one exactly according to your requirements.txt and switch from one to another with a single click (in the terminal it takes two commands: deactivate -> activate).
You can read more about how to handle Python virtual environments on the VS Code site.
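As a small, hedged illustration (the version threshold is just an example), a check script the students can run makes the "am I in the right environment?" question explicit:

import sys

# Fails loudly if the script is run with the system Python 2.7 instead of
# the Python 3 from the course's conda/virtual environment.
assert sys.version_info >= (3, 0), "Wrong interpreter: " + sys.version
print("Using Python", sys.version.split()[0], "from", sys.prefix)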
Given that the aliases are run just once and are not persisted in .bash_profile, python targets the default interpreter rather than the expected conda python3 interpreter.
Try symlinking conda's python3 executable to capture the python name:
ln -sf /Users/$USER/anaconda3/bin/python3 /Users/$USER/anaconda3/bin/python
This will create or update the symlink. Use the same approach for pip and pip3.
The Python extension in VS Code lets you select which interpreter will be used to run the scripts.
It is in the settings under "python.pythonPath"; just set it to point to the interpreter of your choice.
It can be set on a project basis as well (which is how you ensure that a project that has a virtual environment will execute using that interpreter and packages), you just select Workspace in the settings pane and add the desired python interpreter there.
I've been searching for this with no success, and I don't know if I am missing something. I already have a virtualenv, but how do I create a project to associate the virtualenv with? Thanks.
P.S. I'm on Windows.
I could be wrong here, but I do not believe that a virtualenv is, by its very nature, something that you associate with a project. When you use a virtualenv, you're basically saying, "I'm taking this Python interpreter, installing what I want on it, and setting it aside from the Python interpreter that the entire computer uses by default." Virtualenv does not have a concept of a Python "project"; it is just a custom version of a Python interpreter that you run code through. There are tools in IDEs like PyCharm that enable you to associate a project with a virtualenv, but those are another layer on top of the base software.
In order to use a virtualenv with a project, you will need to "activate" it every time you wish to use it. The documentation for activating a virtualenv on Windows is found here.
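To make that "set-aside interpreter" idea concrete, here is a small illustrative check (not tied to any particular tool) that you can run before and after activation; note that older virtualenv releases expose sys.real_prefix, while the stdlib venv module and recent virtualenv use sys.base_prefix:

import sys

# True when this interpreter belongs to a virtual environment:
# either sys.real_prefix is set (legacy virtualenv) or sys.prefix
# no longer matches sys.base_prefix (venv / modern virtualenv).
in_venv = hasattr(sys, "real_prefix") or sys.prefix != getattr(sys, "base_prefix", sys.prefix)
print("sys.prefix:", sys.prefix)
print("in a virtual environment?", in_venv)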
EDIT:
Saw that you had virtualenvwrapper tagged in your post, so I did a bit of hunting on that. It would appear that there is the mkproject command, which creates a project folder and then associates it with a virtualenv interpreter. Documentation on it can be found here.
Requirements:
Virtual Env
Pycharm
Go to your virtual env and type which python.
Add remote project interpreter (File > Default Settings > Project Interpreter (cog) add remote)
You'll need to set up your file system so that PyCharm can also open the project.
NOTE:
Do not remove your virtual environment without saving your run configurations; that will cause PyCharm to see your run configurations as corrupt.
There's a Share checkbox in the top right; enable this and your run configs will be saved under the .idea directory, and you'll have a lot fewer issues.
If you already have your virtualenv installed you just need to start using it.
Create your project's virtual environment using virtualenv env_name in cmd. To associate a specific version of Python with your environment, use virtualenv env_name -p pythonX.X;
Activate your environment by navigating into its Scripts folder and executing activate.
Your terminal now is using your virtual environment, that means every python package you install and the python version you run will be the ones you configured inside your env.
I like to create environments with the names similar to my projects, I always use one environment to each project, that helps keeping track of which packages my specific projects need to run.
If you haven't read much about venvs yet, try googling requirements.txt along with the pip freeze command; those are pretty useful for keeping track of your project's packages.
I like Pipenv: Python Dev Workflow for Humans to manage environments:
Pipenv is a tool that aims to bring the best of all packaging worlds (bundler, composer, npm, cargo, yarn, etc.) to the Python world. Windows is a first-class citizen, in our world.
It automatically creates and manages a virtualenv for your projects, as well as adds/removes packages from your Pipfile as you install/uninstall packages. It also generates the ever-important Pipfile.lock, which is used to produce deterministic builds.
Pipenv is primarily meant to provide users and developers of applications with an easy method to setup a working environment.
I'm new to Django and currently going through the main tutorial. Even though it was working earlier, when I do python manage.py runserver, python manage.py -h, or any other command, the shell doesn't output anything. Wondering what I'm doing wrong.
The problem is that the first line in manage.py breaks the file on windows.
The first line should look like this:
#!/usr/bin/env python
Removing it will fix the issue.
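For reference, a typical Django manage.py looks roughly like the sketch below (mysite is a placeholder for your own project name); the shebang in question is the very first line:

#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    # "mysite.settings" is whatever settings module startproject generated for you
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)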
First, check if python is fully installed by typing "python" in a shell.
Then you should try python manage.py runserver inside your django project. If you don't have any django project, try creating one by typing django-admin.py startproject mysite. If nothing is displayed in your shell, you must have installed Django the wrong way.
Please refer to Django Documentation at https://docs.djangoproject.com/en/1.4/intro/install/
If you had your server running till one point and a certain action/change broke it, try going back to the previous state.
In my case there was an email trigger which would put the system in an invalid state if the email didn't go through. Doing git stash, then selectively popping the stash and trying runserver again, helps narrow the problem down to a particular file in your project.
Please try this.
Uninstall Python.
Go into the C drive and search for Django. You will find many Django-related files.
Delete every Django installation file (but do not delete your own project's Django files).
Install Python.
It worked for me.
If you created a virtual environment, then activate it. You can try this command (in the virtual environment directory) if you're using Windows:
.\Scripts\activate
On Ubuntu it works for me by running manage.py as a script:
./manage.py runserver
I was just stuck with the same problem and found a solution that works, but it is tedious.
You need to know the location of the python.exe file in your computer. It is usually
C:/Users/USERNAME/AppData/Local/Programs/Python//python.exe
Modify as required and run the following in CMD,
C:/Users/USER1/AppData/Local/Programs/Python/Python38-32/python.exe
F:/mysite/manage.py runserver
Hope this works :)
If you are using Redis Server on Windows, check whether it is running. I had the same problem and realized my Redis Server was not running; I started it and now my manage.py commands work fine.
The same happened to me as well, but this issue is a minor one: it happens if you have multiple versions of Python on your system. You can select a specific Python version by running python3 or whichever version you want.
So you should start from the beginning: uninstall Django first, then
create a virtual environment. Decide upon a directory where you want to place it, and run the venv module as a script with the directory path, for example:
python3 -m venv tutorial-env
# This will create the tutorial-env directory if it doesn't exist, and also create directories inside it.
Once you've created a virtual environment, you may activate it.
On Windows, run:
tutorial-env\Scripts\activate.bat
On Unix or MacOS, run:
source tutorial-env/bin/activate
Now,
In the command prompt, ensure your virtual environment is active, and execute the following command:
...> py -m pip install Django
NOTE:
If django-admin only displays the help text no matter what arguments it is given, there is probably a problem with the file association in Windows. Check if there is more than one environment variable set for running Python scripts in PATH. This usually occurs when there is more than one Python version installed.
Another solution, if you can, is to upgrade Django
pip install django --upgrade
Oftentimes you will get other, unrelated issues to solve that are linked with the upgrade, but once everything is fixed the server should run just fine.
If you can't upgrade Django, this problem also happens when the code was built using Python 2.x and you're locally using Python 3.x.
The quicker fix in that case is to uninstall Python 3.x from your machine and make sure Python 2.x was added to the path. I've seen some developers setting up alias in PowerShell to have more than one version in the environment too.
I think the problem is in the manage.py file (50% sure); compare it with another copy that is known to be correct.
I have already created a virtualenv for running my python script.
Now when I integrate this Python script with Jenkins, I have found that at execution time Jenkins is using the wrong Python environment.
How can I ensure Jenkins is using the correct virtualenv?
As an example, in my case I want to use the virtualenv test. How can I use this pre-prepared virtualenv to run my Python script?
source test/bin/activate
You should install one of the Python plugins. I've used ShiningPanda. Then you'll be able to create separate virtual environment configurations under Manage Jenkins > Configure System > Python > Python installation.
In job configuration there will be Python Builder step, where you can select python environment.
Just make sure you're not starting Jenkins service from within existing python virtual environment.
First, you should avoid using ShiningPanda because it is broken. It will fail if you try to run jobs in parallel and it is also not compatible with Jenkins 2 pipelines.
When builds are run in parallel (concurrently), Jenkins appends @2, @3, ... to the workspace directory so two executions will not share the same folder. Jenkins does clone the original workspace, so do not be surprised if it contains a virtualenv you created in a previous build.
You need to take care of the virtualenv creation yourself, but you have to be very careful about how you use it:
the workspace folder may not be cleaned up, and its location could change from one build to another
virtualenvs are known to break when they are moved, and Jenkins moves them
creating files outside the workspace is a really bad CI practice, so avoid the temptation to use /tmp
So your only safe option is to create a unique virtual environment folder for each build inside the workspace. You can easily do this by using the $BUILD_NUMBER environment variable.
This will be different even if you have jobs running in parallel, and it will never repeat.
Downsides:
speed: virtualenvs are not reused between builds, so they are fully recreated. If you use --system-site-packages you may speed up the creation considerably (if the heavy packages are already installed on the system)
space: if the workspace is not cleaned regularly, the number of virtualenvs will grow. Workaround: have a job that cleans workspaces every week or two. This is also a good practice for spotting other errors. Some people choose to clean the workspace for each execution.
Shell snippet
#!/bin/bash
set -euxo pipefail
# Create a unique venv folder to use *inside* the workspace
VENV=".venv-$BUILD_NUMBER"
# Initialize the new venv
virtualenv "$VENV"
# Activate it (PS1 gets a default value so `set -u` does not trip on it) and update pip
PS1="${PS1:-}" source "$VENV/bin/activate"
pip install --upgrade pip
# <YOUR CODE HERE>
The set -euxo pipefail line enables bash strict mode; more details at http://redsymbol.net/articles/unofficial-bash-strict-mode/
You can use the Pyenv Pipeline plugin. The usage is very easy; just add
stage('my_stage'){
    steps{
        script{
            withPythonEnv('path_to_venv/bin'){
                sh("python foo.py")
                // ...
            }
        }
    }
}
You can add pip install whatever in your steps to update any virtual environment you are using.
By default it will look for the virtual environment in the Jenkins workspace, and if it does not find it, it will create a new one.