Two virtual environments for a single Django project? - python

I have a Django project for which I have created a virtual environment with the packages it requires.
Now for development, I was looking for a tool where I could try some code before including it in my project. I had a good experience with Jupyter a little while ago and thought it would be nice to work with this tool again.
In order to avoid cluttering the minimal virtual environment with Jupyter's dependencies, I duplicated it and installed jupyter along with django-extensions.
In my settings.py, I have:
if os.environ.get("VENV_NAME") == "jupyter-sandbox":
    INSTALLED_APPS += ['django_extensions']
so that I can still use the minimal virtual environment without django_extensions.
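(VENV_NAME is not something virtualenv sets by itself; the check assumes the variable is exported before Django starts, e.g. by adding export VENV_NAME=jupyter-sandbox to the sandbox environment's bin/activate script.)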
It works fairly well for now, apart from the fact that I cannot run the server from my Jupyter-enabled virtual environment. This is because my project uses django-images and Django can't find a migration file in this environment (in site-packages/django_images/migrations). The error message is below:
raise NodeNotFoundError(self.error_message, self.key, origin=self.origin)
django.db.migrations.exceptions.NodeNotFoundError: Migration core.0001_initial dependencies reference nonexistent parent node ('django_images', '0002_auto_20170710_2103')
Would it be a good idea to create a symlink so that both virtual environments share the same django-images migrations folder, or would that completely mess up my project?
I am not entirely confident with migrations yet and would appreciate some advice on this.

I think the confusion here is about what you should maintain by hand. In any standard project, you should have a hand-cultivated list of project dependencies. Most Django users put that list into requirements.txt (usually recommended) or setup.py. You can always have multiple requirements files: requirements-test.txt, requirements-dev.txt, etc. Use -r requirements.txt at the top of the other files to "import" their requirements:
# requirements.txt
django==1.11.3
and then...
# requirements-test.txt
-r requirements.txt
pytest
tox
and finally...
# requirements-dev.txt
-r requirements-test.txt
ipython
pdbpp
jupyter
Your goal is to have everything your project needs to run in that first file; it doesn't matter if your virtual environment contains more than that. Additionally, you should use something like tox to check that your requirements.txt really does contain exactly what the project needs.
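For instance, a minimal tox.ini for that kind of check could look something like this (the Python version and test command are placeholders for whatever your project uses):
# tox.ini -- sketch only: tox builds a clean virtualenv from the requirements files and runs the tests in it
[tox]
envlist = py36
skipsdist = true

[testenv]
deps = -rrequirements-test.txt
commands = pytest
If a dependency is missing from the requirements files, the fresh environment tox builds will fail instead of silently borrowing whatever happens to be installed in your dev virtualenv. Hope that helps.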

Related

What workflow can I use for moving from Poetry environment from dev to prod?

I've been coding small personal projects using conda for years. Recently I've been working on a higher volume of projects without the need for scientific packages, so I decided to switch over to standard Python. I am now using poetry as my package manager.
My development environment is working fine, and I'm liking poetry overall. But it's time to start rolling out my apps to several different machines, since I'm letting some of my workmates use them. I don't know how to go about getting my projects to them, as they are not developers.
Part of my app is a system_tray.py which loads on startup (Windows) and is basically the launcher for all the additional *.py files that perform different functions. This is the issue I need to solve for the small rollout.
system_tray.py
...
# hard-coded path to the Poetry-managed virtualenv's interpreter
args = ['C:\\poetry\\Cache\\virtualenvs\\classroom-central-3HYMdsQi-py3.11\\Scripts\\python.exe',
        'ixl_grade_input.py']
subprocess.call(args)
...
This obviously triggers the running of a separate *.py file. What I'm not sure of is what workflow deals with this situation. How would I make this work in a development environment and still be able to roll it out into a production environment?
In my case, I could just manually modify everything, install Python on the individual machines and pip install the required modules, but that can't be how a real rollout goes, and I'm looking to learn a more efficient method.
I'm aware of, but have yet to use, a requirements.txt file. I first thought that perhaps you just set up virtualenvs to mirror the production environment configuration. However, after trying to research this, it seems as though people roll their virtualenvs out into the production environments as well. I also understand that I can configure poetry to install the venv into the project directory, and then just use a relative path in the "args"?
This is one workflow:
Add launch scripts to your pyproject.toml
Get the source code, Python and Poetry onto the target computer
Run poetry install to create a Poetry environment (a virtual environment under the hood)
Run your launch scripts with the poetry run command - they will automatically use the environment created by poetry install
Some examples of how to add launch scripts to pyproject.toml:
[tool.poetry.scripts]
trade-executor = 'tradeexecutor.cli.main:app'
get-latest-release = 'tradeexecutor.cli.latest_release:main'
prepare-docker-env = 'tradeexecutor.cli.prepare_docker_env:main'
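Once poetry install has been run, those entries are available as commands inside the environment, for example:
poetry run trade-executor
Applied to the tray-launcher case above, declaring a script for ixl_grade_input under [tool.poetry.scripts] and starting it via poetry run (assuming Poetry is on PATH for that user) would remove the hard-coded virtualenv path from system_tray.py; the script name itself is whatever you choose to declare, not something Poetry generates for you.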

Django Question - Should I start a new project in the same virtual environment?

Django newbie here.
I have an existing Django virtual environment that was set up as part of a tutorial,
using Django 2.2 and Vagrant + VirtualBox on my laptop.
There was 1 project created for the tutorial above.
I will now be starting another Django tutorial,
wherein I will be creating a few more projects.
What is the best way to go about this?
Should I create a new virtual environment for tutorial #2?
Or use the existing environment, that was setup for tutorial #1?
FYI - tutorial #2 uses Django 1.11
Thanks a lot!
It is always good practice to create a separate virtual environment for each Django project. This matters even more here, since your two tutorials use different Django versions (2.2 and 1.11), which cannot coexist in a single environment. Beyond that, suppose you have multiple Django projects sharing one virtualenv and you want to host one of them on a platform like Heroku, which requires a requirements.txt file for Python apps: when you run pip freeze to generate the requirements, you will find many packages in that environment that your current project does not need, and installing all of them on Heroku might make you run out of space before you know it. So keep a separate virtualenv per project, and keep a requirements.txt per project as well.
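For example, a separate environment per tutorial could be set up roughly like this (directory names are just illustrative):
python -m venv tutorial1-env
tutorial1-env/bin/pip install "django==2.2.*"
python -m venv tutorial2-env
tutorial2-env/bin/pip install "django==1.11.*"
tutorial2-env/bin/pip freeze > requirements.txt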
You have multiple options for package managers:
Pipenv
Poetry
Virtualenv
In general, Pipenv or Poetry are easier to use than Virtualenv.
If you are still stuck, try using the Imagine smart compiler, which handles your package installation and also lets you generate the code for your application logic.
Just do
npx imagine create -f Django -n myapp
make install && make run

Python Django project - What to include in code repository?

I'd like to know whether I should add below files to the code repository:
manage.py
requirements.txt
I also created the very core of the application, which includes settings.py. Should that be added to the repo?
And the last one: after creating the project, a whole .idea/ folder was created. It was not included in the .gitignore template, so what about that?
manage.py
This file is generated for you by django-admin startproject rather than shipped as part of the Django package. It is only a thin, project-specific wrapper around django-admin (it points at your settings module), so it is trivial to regenerate, but the usual convention is to commit it along with the rest of the project.
requirements.txt
This is how you actually tell people WHAT to install to run your project. So at the very least, for a Django project, you will want Django to be in that file. So yes, this file should be included in your git repo. Then anyone who pulls your code can simply run pip install -r requirements.txt and have the requirements installed.
settings.py
This is where things get slightly more into personal preference, but in general, yes it (or something like it) should be included. I like to follow the "Two Scoops of Django" pattern (https://startcodingnow.com/two-scoops-django-config/) of having different settings for different environments.
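A bare-bones sketch of that pattern (module names follow the book's convention; adapt them to your project):
# settings/base.py -- shared defaults, committed to the repo
DEBUG = False
ALLOWED_HOSTS = []

# settings/local.py -- per-developer overrides
from .base import *
DEBUG = True
You then point Django at the right module per environment, for example python manage.py runserver --settings=myproject.settings.local, and keep real secrets out of the committed files (environment variables are the usual route).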
.idea/
This is IDE-specific information. JetBrains has a sample file showing what they recommend ignoring and what they think you should keep in that folder
(https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore), but I think it is far more common to just ignore that folder altogether.
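In practice, ignoring it usually means one extra line in your project's .gitignore:
.idea/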

When working with a venv virtual environment, which files should I be committing to my git repository?

Using GitHub's .gitignore, I was able to filter out some files and directories. However, there are a few things that left me a little bit confused:
GitHub's .gitignore did not include /bin and /share created by venv. I assumed they should be ignored by git, however, as the user is meant to build the virtual environment themselves.
Pip generated a pip-selfcheck.json file, which seemed mostly like clutter. I assume it usually does this, and I just haven't seen the file before because it's been placed with my global pip.
pyvenv.cfg is what I really can't make sense of, though. On one hand, it specifies the Python version, which ought to be needed by others who want to use the project. On the other hand, it also specifies home = /usr/bin, which, while probably correct on a lot of Linux distributions, won't necessarily apply to all systems.
Are there any other files/directories I missed? Are there any stricter guidelines for how to structure a project and what to include?
Although venv is a very useful tool, you should not assume (unless you have good reason to do so) that everyone who looks at your repository uses it. Avoid committing any files used only by venv; these are not strictly necessary to be able to run your code and they are confusing to people who don't use venv.
The only configuration file you need to include in your repository is the requirements.txt file generated by pip freeze > requirements.txt which lists package dependencies. You can then add a note in your readme instructing users to install these dependencies with the command pip install -r requirements.txt. It would also be a good idea to specify the required version of Python in your readme.
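One way to keep this clean is to create the environment in its own folder (say venv/) rather than directly in the project root, so the only ignore rule you need is:
venv/
and the repository ends up holding just your code, the readme and requirements.txt, with nothing generated by venv or pip.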

Django : Which approach is better [ virtualenv + pip ] vs [manually carrying packages in svn]?

I have a django project that uses a lot of 3rd party apps, so I wanted to decide between two approaches for managing my situation:
I can use [ virtualenv + pip ] along with pip freeze as a requirements file to manage my project dependencies.
I don't have to worry about the apps, but can't have that committed with my code to svn.
I can have a lib folder in my svn structure and have my apps sit there and add that to sys.path
This way, my dependencies can be committed to svn, but I have to manage sys.path
Which way should I proceed ?
What are the pros and cons of each approach ?
Update:
Method 1 disadvantage: difficult to work with App Engine.
This has been an unanswered question (at least to me) so far. There has been some discussion on this recently:
https://plus.google.com/u/0/104537541227697934010/posts/a4kJ9e1UUqE
Ian Bicking said this in a comment:
I do think we can do something that incorporates both systems. I posted a recipe for handling this earlier, for instance (I suppose I should clean it up and repost it). You can handle libraries in a very similar way in Python, while still using the tools we have to manage those libraries. You can do that, but it's not at all obvious how to do that, so people tend to rely on things like reinstalling packages at deploy time.
http://tarekziade.wordpress.com/2012/02/10/defining-a-wsgi-app-deployment-standard/
The first approach seems the most common among Python devs. When I first started doing development in Django it felt a bit weird, since in PHP it is quite common to check third-party libs into the project repo, but as Ian Bicking said in the linked post, PHP-style deployment leaves out things such as non-portable libraries. You don't want to package things such as mysqldb or PIL into your project; those are better handled by tools like pip or distribute.
So this is what I'm using currently.
All projects have a virtualenv directory at the project root. We name it .env and ignore it in VCS. The first thing a dev does when starting development is to initialize this virtualenv and install all the requirements specified in the requirements.txt file. I prefer having the virtualenv inside the project dir so that it is obvious to the developer, rather than having it somewhere else such as $HOME/.virtualenv and then doing source $HOME/.virtualenv/project_name/bin/activate to activate the environment. Instead, developers interact with the virtualenv by invoking its executables directly from the project root, such as:
.env/bin/python
.env/bin/python manage.py runserver
To deploy, we have a fabric script that first exports our project directory together with the .env directory into a tarball, then copies the tarball to the live server, untars it into the deployment dir and does some other tasks like restarting the server. When we untar the tarball on the live server, the fabric script makes sure to run virtualenv once again so that all the shebang paths in .env/bin get fixed. This means we don't have to reinstall dependencies on the live server. The fabric workflow for deployment looks like:
fab create_release:1.1 # create release-1.1.tar.gz
fab deploy:1.1 # copy release-1.1.tar.gz to live server and do the deployment tasks
fab deploy:1.1,reset_env=1 # same as above but recreate virtualenv and re-install all dependencies
fab deploy:1.1,update_pkg=1 # only reinstall deps but do not destroy previous virtualenv like above
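In sketch form, the fabfile behind those commands looks something like the following (the host, paths and restart step are placeholders, and the real script does more):
# fabfile.py -- rough sketch of the create_release / deploy tasks described above
from fabric.api import task, local, put, run, cd, env

env.hosts = ["deploy@liveserver"]  # placeholder host

@task
def create_release(version):
    # pack the project together with its .env virtualenv into release-<version>.tar.gz
    local("tar czf release-%s.tar.gz --exclude='*.pyc' ." % version)

@task
def deploy(version, reset_env=0, update_pkg=0):
    tarball = "release-%s.tar.gz" % version
    put(tarball, "/srv/project1/")  # placeholder deployment dir
    with cd("/srv/project1"):
        run("tar xzf %s" % tarball)
        if int(reset_env):
            # recreate the virtualenv from scratch and reinstall everything
            run("rm -rf .env && virtualenv .env && .env/bin/pip install -r requirements.txt")
        elif int(update_pkg):
            # keep the existing env but reinstall the dependencies
            run(".env/bin/pip install -r requirements.txt")
        else:
            # just re-run virtualenv so the interpreter paths baked into .env/bin get fixed
            run("virtualenv .env")
        run("sudo /etc/init.d/apache2 restart")  # placeholder restart step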
We also do not install the project src into the virtualenv using setup.py, but instead add its path to sys.path. So when deploying under mod_wsgi, we have to specify two paths in our vhost config for mod_wsgi, something like:
WSGIDaemonProcess project1 user=joe group=joe processes=1 threads=25 python-path=/path/to/project1/.env/lib/python2.6/site-packages:/path/to/project1/src
In short:
We still use pip+virtualenv to manage dependencies.
We don't have to reinstall requirements when deploying.
We have to do a bit of sys.path maintenance.
Virtualenv and pip are fantastic for working on multiple django projects on one machine. However, if you only have one project that you are editing, it is not necessary to use virtualenv.
