Have Django use Mutt as its email backend?

I know that you can use the EMAIL_BACKEND setting, and I think I have written a working mutt backend, but I can't set my EMAIL_BACKEND to my class because it apparently has to be the string import path, not the name of the class. The local path (emails) doesn't work because the current directory apparently isn't in the Python import path. And I can't use local package imports (from . import) because, of course, it has to be a simple string.
I got it working by copying my module into /usr/local/lib/python3.7/, but that's such a terrible long-term solution that it isn't even worth it.
My project directory structure is like: django/project/app/, with emails.py under app/, alongside settings.py and the others. The project/app structure didn't make a lot of sense to me (I only have one app) but I got the impression that it was the intended way to set up Django, so I did that.
It shouldn't be relevant, but BTW my mutt backend code is:
import subprocess

from django.core.mail.backends.base import BaseEmailBackend


class MuttBackend(BaseEmailBackend):
    def send_messages(self, email_messages):
        for m in email_messages:
            self.send(m)

    def send(self, message):
        print(message.subject, message.from_email, message.to, message.body)
        mutt = subprocess.Popen(args=['/usr/local/bin/mutt', *message.to,
                                      '-s', message.subject,
                                      '-e', f'set from="{message.from_email}"'],
                                stdin=subprocess.PIPE)
        mutt.stdin.write(bytes(message.body, 'utf-8'))
        mutt.stdin.close()
How can I set EMAIL_BACKEND to a class without using its import path, or find another workaround? I did some googling but couldn't find anyone else who had gotten anything like this to work.

I figured it out. The default config assumed uWSGI was running in project/, not project/app/, so the import path I needed was app.emails.MuttBackend.
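For anyone else who hits this: the fix is just the dotted path in settings, relative to where the server actually runs. A minimal sketch, assuming the project/app layout described above (emails.py lives in app/):

# settings.py
EMAIL_BACKEND = 'app.emails.MuttBackend'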

Related

Issues with module imports going from python 2 to python 3

I am trying to upgrade a 10-year-old event listener that I didn't write from Python 2.7 to Python 3.7. The basic issue I'm running into is the way the original script was importing its plugins. The idea behind the original script was that any Python file put into a "plugins" folder with a "registerCallbacks" function inside it would auto-load itself into the event listener and run. It's been working great for lots of studios for years, but Python 3.7 is not liking it at all.
The folder structure for the original code is as follows:
EventListenerPackage
    src
        event_listener.py
    plugins
        plugin_1.py
        plugin_2.py
From this, you can see that both the event listener and the plugins are held in folders that are parallel to each other, not nested.
The original code read like this:
# Python 2.7 implementation
import imp

class Plugin(object):
    def __init__(self, path):
        self._path = 'c:/full/path/to/EventListenerPackage/plugins/plugin_1.py'
        self._pluginName = 'plugin_1'

    def load(self):
        try:
            plugin = imp.load_source(self._pluginName, self._path)
        except:
            self._active = False
            self.logger.error('Could not load the plugin at %s.\n\n%s', self._path, traceback.format_exc())
            return

        regFunc = getattr(plugin, 'registerCallbacks', None)
Due to the nature of the changes (as I understand them) in the way that Python 3 imports modules, none of the other message boards seem to be getting me to the answer.
I have tried several different approaches, the best so far being:
How to import a module given the full path?
I've tried several different methods, including adding the full path to the sys.path, but I always get "ModuleNotFoundError".
Here is roughly where I'm at now.
import importlib.util
import importlib.abc
import importlib

class Plugin(object):
    def __init__(self, path):
        self._path = 'c:/full/path/to/EventListenerPackage/plugins/plugin_1.py'
        self._pluginName = 'plugin_1'

    def load(self):
        try:
            spec = importlib.util.spec_from_file_location('plugins.%s' % self._pluginName, self._path)
            plugin = importlib.util.module_from_spec(spec)
            # OR I HAVE ALSO TRIED
            plugin = importlib.import_module(self._path)
        except:
            self._active = False
            self.logger.error('Could not load the plugin at %s.\n\n%s', self._path, traceback.format_exc())
            return

        regFunc = getattr(plugin, 'registerCallbacks', None)
Does anyone have any insights into how I can actually import these modules with the given folder structure?
Thanks in advance.
You're treating plugins like it's a package. It's not. It's just a folder you happen to have your plugin source code in.
You need to stop putting plugins. in front of the module name argument in spec_from_file_location:
spec = importlib.util.spec_from_file_location(self._pluginName, self._path)
Aside from that, you're also missing the part that actually executes the module's code:
spec.loader.exec_module(plugin)
Depending on how you want your plugin system to interact with regular modules, you could alternatively just stick the plugin directory onto the import path:
sys.path.append(plugin_directory)
and then import your plugins with import or importlib.import_module. Probably importlib.import_module, since it sounds like the plugin loader won't know plugin names in advance:
plugin = importlib.import_module(plugin_name)
If you do this, plugins will be treated as ordinary modules, with consequences like not being able to safely pick a plugin name that collides with an installed module.
As an entirely separate issue, it's pretty weird that your Plugin class completely ignores its path argument.
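Putting those corrections together (and, per the aside, actually using the path argument), a fixed version might look roughly like this - a sketch that keeps the original names and assumes self.logger is set up elsewhere, as in the original:

import importlib.util
import traceback

class Plugin(object):
    def __init__(self, path):
        self._path = path  # actually use the path argument
        self._pluginName = 'plugin_1'
        self._active = True

    def load(self):
        try:
            # Plain module name -- no bogus 'plugins.' prefix
            spec = importlib.util.spec_from_file_location(self._pluginName, self._path)
            plugin = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(plugin)  # actually execute the module's code
        except Exception:
            self._active = False
            self.logger.error('Could not load the plugin at %s.\n\n%s',
                              self._path, traceback.format_exc())
            return
        regFunc = getattr(plugin, 'registerCallbacks', None)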

ValueError: Attempted relative import in non-package for running standalone scripts In Flask Web App

I have a Flask web app and its structure is as follows:
/app
    /__init__.py
    /wsgi.py
    /app
        /__init__.py
        /views.py
        /models.py
        /method.py
        /common.py
        /db_client.py
        /amqp_client.py
        /cron
            /__init__.py
            /daemon1.py
            /daemon2.py
        /static/
            /main.css
        /templates/
            /base.html
    /scripts
        /nginx
        /supervisor
    /Dockerfile
    /docker-compose.yml
In app/app/cron I have written standalone daemons which I want to call outside of Docker, e.g.
python daemon1.py
daemon1.py code
from ..common import stats
from ..method import msapi, dataformater
from ..db_client import db_connection

def run_daemon():
    ......
    ......
    ......

if __name__ == "__main__":
    run_daemon()
So when I try to run daemon1.py, it throws ValueError: Attempted relative import in non-package.
Please suggest the right approach for the imports as well as for structuring these daemons.
Thanks in advance.
I ran into the exact same problem with an app that was running Flask and Celery. I spent far too many hours Googling for what should be an easy answer. Alas, there was not.
I did not like the "python -m" syntax, as that was not terribly practical for calling functions within running code. And on account of my seemingly small brain, I was not able to come to grips with any of the other answers out there.
So...there is the wrong way and the long way. Both of them work (for me) and I'm sure I'll get a tongue lashing from the community.
The Wrong Way
You can call a module directly using the imp package like so:
import imp
import os

common = imp.load_source('common', os.path.dirname(os.path.abspath(__file__)) + '/common.py')
result = common.stats()  # not sure how you call stats, but you hopefully get the idea
I had a quick search for the references that said that is a no-no, but I can't find them...sorry.
The Long Way
This method involves temporarily appending the path of each of your modules to your PATH. This has worked for me on my Docker deploys and works nicely regardless of the container's directory structure. Here are the steps:
1) You must import the relevant modules from the parent directories in your __init__.py files. This is really the whole point of the __init__.py - allowing the modules in its package to be importable. So, in your case, cron/__init__.py should contain:
from . import common
It doesn't look like your directories go any higher than that, but you would do the same for any other package levels up as well.
2) Now you need to append the path of the module to the PATH variable. You can see what is in there right now by running:
sys.path
As expected, you probably won't see any of your modules in there. That means that Python can't figure out what you want when you import the common module. In order to add the path, you need to figure out the directory structure. You will want to make this dynamic to account for changing directories.
It's worth noting that this will need to run each time your module runs. I'm not sure what your cron module is, but in my case it is Celery. So, this runs only when I fire up workers and the initial crontabs.
Here is the hack I threw together (I'm sure there is a cleaner way to do it):
import os
import sys

curr_path = os.getcwd()  # current path where cron is running
parrent_path = os.path.abspath(os.path.join(os.getcwd(), '..'))  # the parent directory path
parrent_dir = os.path.basename(os.path.abspath(parrent_path))  # the parent directory name

while parrent_dir != 'project_name':  # loop until you get to the top directory - should be the project name
    parrent_path = os.path.abspath(os.path.join(parrent_path, '..'))
    parrent_dir = os.path.basename(os.path.abspath(parrent_path))
In your case, this might be a challenge since you have two directories named 'app'. Your top level 'app' is my 'project_name'. For the next step, let's assume you have changed it to 'project_name'.
3) Now you can append the path for each of your modules to the PATH variable:
sys.path.append(parrent_path + '/app')
Now if you run sys.path again, you should see the path to /app in there.
In summary: make sure all of your __init__'s have imports, determine the paths to the modules you want to import, append the paths to the PATH variable.
I hope that helps.
@greenbergé Thank you for your solution. I tried it, but it didn't work for me.
So to make things work I changed my code a little: instead of relying on the __main__ block of daemon1.py, I call run_daemon() directly with python -c:
python -c 'from app.cron.daemon1 import run_daemon; run_daemon()'
It is not an exact solution to the problem, but it worked for me.
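For reference, the relative imports in daemon1.py also resolve if the daemon is run as a module with Python's -m switch (the syntax the first answer mentioned disliking), assuming it is launched from the top-level directory that contains the app package:

python -m app.cron.daemon1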

How to get the location of a Zope installation from inside an instance?

We are working on an add-on that writes to a log file and we need to figure out where the default var/log directory is located (the value of the ${buildout:directory} variable).
Is there an easy way to accomplish this?
In the past I had a similar use case.
I solved it by declaring the path inside the zope.conf:
zope-conf-additional +=
    <product-config pd.prenotazioni>
        logfile ${buildout:directory}/var/log/prenotazioni.log
    </product-config>
See the README of this product:
https://github.com/PloneGov-IT/pd.prenotazioni/
This zope configuration can then be interpreted with this code:
from App.config import getConfiguration
product_config = getattr(getConfiguration(), 'product_config', {})
config = product_config.get('pd.prenotazioni', {})
logfile = config.get('logfile')
See the full example
here: https://github.com/PloneGov-IT/pd.prenotazioni/blob/9a32dc6d2863b5bfb5843d441e652101406d9a2c/pd/prenotazioni/__init__.py#L17
Worth noting is the fact that the initial return avoids multiple logging if the init function is mistakenly called more than once.
Anyway, if you do not want to play with buildout and custom zope configuration, you may want to get the default event log location.
It is specified in the zope.conf. You should have something like this:
<eventlog>
    level INFO
    <logfile>
        path /path/to/plone/var/log/instance.log
        level INFO
    </logfile>
</eventlog>
I was able to obtain the path with this code:
from App.config import getConfiguration
import os
eventlog = getConfiguration().eventlog
logpath = eventlog.handler_factories[0].instance.baseFilename
logfolder = os.path.split(logpath)[0]
Probably, by looking at the App module code, you will find a more straightforward way of getting this value.
Another possible (IMHO weaker) solution would be to store (through buildout or your preferred method) the logfile path in an environment variable.
You could let buildout set it in parts/instance/etc/zope.conf in an environment variable:
[instance]
recipe = plone.recipe.zope2instance
environment-vars =
    BUILDOUT_DIRECTORY ${buildout:directory}
Check it in Python code with:
import os
buildout_directory = os.environ.get('BUILDOUT_DIRECTORY', '')
By default you already have the INSTANCE_HOME environment variable, which might be enough.
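A minimal sketch of that check (the derivation of the buildout directory assumes the standard layout where INSTANCE_HOME is ${buildout:directory}/parts/instance; adjust if your buildout differs):

import os

instance_home = os.environ.get('INSTANCE_HOME', '')
buildout_dir = os.path.dirname(os.path.dirname(instance_home))
log_dir = os.path.join(buildout_dir, 'var', 'log')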

Automatically delete MEDIA_ROOT between tests

I was wondering if it were possible, and preferably not too difficult, to use Django DiscoverRunner to delete my media directory between every test, including once at the very beginning and once at the very end. I was particularly interested in the new attributes "test_suite" and "test_runner" that were introduced in Django 1.7 and was wondering if they would make this task easier.
I was also wondering how I can make the test-specific MEDIA_ROOT a temporary directory; currently I have a regular MEDIA_ROOT called "media" and a testing MEDIA_ROOT called "media_test", and I use rmtree in setUp and tearDown of every test class that involves the media directory. The way I specify which MEDIA_ROOT to use is in my test.py settings file; currently I just have:
MEDIA_ROOT = normpath(join(DJANGO_ROOT, 'media_test'))
Is there a way I can set MEDIA_ROOT to a temporary directory named "media" instead?
This question is a bit old, my answer is from Django 2.0 and Python 3.6.6 or later. Although I think the technique works on older versions too, YMMV.
I think this is a much more important question than it gets credit for! When you write good tests, it's only a matter of time before you need to whip up or generate test files. Either way, you're in danger of polluting the file system of your server or developer machine. Neither is desirable!
I think the write-up on this page is a best practice. I'll copy/paste the code snippet below if you don't care about the reasoning (more notes afterwards):
----
First, let’s write a basic, really basic, model
from django.db import models

class Picture(models.Model):
    picture = models.ImageField()
Then, let’s write a really, really basic, test.
from PIL import Image
import tempfile

from django.test import TestCase, override_settings

from .models import Picture


def get_temporary_image(temp_file):
    size = (200, 200)
    color = (255, 0, 0)
    image = Image.new("RGB", size, color)  # JPEG cannot store an alpha channel, so use RGB
    image.save(temp_file, 'jpeg')
    return temp_file


class PictureDummyTest(TestCase):
    @override_settings(MEDIA_ROOT=tempfile.TemporaryDirectory(prefix='mediatest').name)
    def test_dummy_test(self):
        temp_file = tempfile.NamedTemporaryFile()
        test_image = get_temporary_image(temp_file)
        # test_image.seek(0)
        picture = Picture.objects.create(picture=test_image.name)
        print("It worked!", picture.picture)
        self.assertEqual(len(Picture.objects.all()), 1)
----
I made one important change to the code snippet: TemporaryDirectory().name. The original snippet used gettempdir(). The TemporaryDirectory function creates a new folder with a system-generated name every time it's called. That folder will be removed by the OS - but we don't know when! This way, we get a new folder each run, so there is no chance of name conflicts. Note I had to add the .name element to get the name of the generated folder, since MEDIA_ROOT has to be a string. Finally, I added prefix='mediatest' so all the generated folders are easy to identify in case I want to clean them up in a script.
Also potentially useful to you is that the settings override can easily be applied to a whole test class, not just one test function. See this page for details.
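A minimal sketch of the class-level form, reusing the names from the snippet above - decorating the TestCase subclass applies the override to every test method in it:

@override_settings(MEDIA_ROOT=tempfile.TemporaryDirectory(prefix='mediatest').name)
class PictureDummyTest(TestCase):
    def test_dummy_test(self):
        ...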
Also note in the comments after this article some people show an even easier way to get a temp file name without worrying about media settings using NamedTemporaryFile (only valid for tests that don't use Media settings!).
The answer by Richard Cooke works but leaves the temporary directories lingering in the file system, at least on Python 3.7 and Django 2.2. This can be avoided by using a combination of setUpClass, tearDownClass and overriding the settings in the test methods. For example:
import tempfile

from django.test import TestCase


class ExampleTestCase(TestCase):
    temporary_dir = None

    @classmethod
    def setUpClass(cls):
        cls.temporary_dir = tempfile.TemporaryDirectory()
        super(ExampleTestCase, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.temporary_dir = None  # dropping the reference lets TemporaryDirectory clean up after itself
        super(ExampleTestCase, cls).tearDownClass()

    def test_example(self):
        with self.settings(MEDIA_ROOT=self.temporary_dir.name):
            # perform a test
            pass
This way the temporary files are removed right away, and you don't need to worry about the name of the temporary directory either. (Of course, if you want, you can still use the prefix argument when calling tempfile.TemporaryDirectory.)
One solution I have found that works is to simply delete it in setUp / tearDown. I would prefer some way to make it apply automatically to all tests instead of having to put the logic in every test file that involves media, but I have not figured out how to do that yet.
The code I use is:
from shutil import rmtree

from django.conf import settings
from django.test import TestCase


class MyTests(TestCase):
    def setUp(self):
        rmtree(settings.MEDIA_ROOT, ignore_errors=True)

    def tearDown(self):
        rmtree(settings.MEDIA_ROOT, ignore_errors=True)
The reason I do it in both setUp and tearDown is that if I only have it in setUp, I might end up with a lingering media_test directory. Even though it won't be checked in to GitHub by accident (it's in the .gitignore), it still takes up unnecessary space in my project explorer, and I just prefer not having it sit there. If I only have it in tearDown, then I risk causing problems if I quit out of the tests partway through and a later run involves media while the media from the terminated test still lingers.
Something like this?
TESTING_MODE = True
...
MEDIA_ROOT = os.path.join(DJANGO_ROOT, 'media_test' if TESTING_MODE else 'media')
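A common way to flip such a flag automatically, as a sketch (detecting the test command in sys.argv is a convention, not a Django API, and assumes tests are run via manage.py test):

import sys

TESTING_MODE = 'test' in sys.argv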

Cannot import file in Python/Django

I'm not sure what's going on, but on my own laptop, everything works okay. When I upload to my host with Python 2.3.5, my views.py can't find anything in my models.py. I have:
from dtms.models import User
from dtms.item_list import *
where my models, item_list, and views files are in /mysite/dtms/
It ends up telling me it can't find User. Any ideas?
Also, when I use the django shell, I can do "from dtms.models import *" and it works just fine.
Okay, after doing the suggestion below, I get a log file of:
syspath = ['/home/victor/django/django_projects', '/home/victor/django/django_projects/mysite']
DEBUG:root:something <module 'dtms' from '/home/victor/django/django_projects/mysite/dtms/__init__.pyc'>
DEBUG:root:/home/victor/django/django_projects/mysite/dtms/__init__.pyc
DEBUG:root:['/home/victor/django/django_projects/mysite/dtms']
I'm not entirely sure what this means - my file is in mysite/dtms/item_list.py. Does this mean it's being loaded? I see the dtms module is being loaded, but it still can't find dtms.models
The fact that from X import * works does not guarantee that from X import Wowie will work too, you know (if you could wean yourself away from that import * addiction you'd be WAY happier in the long run, but that's another issue ;-).
My general advice in import problems is to bracket the problematic import with try/except:
try:
    from blah import bluh
except ImportError, e:
    import sys
    print 'Import error:', e
    print 'sys.path:', sys.path
    blah = __import__('blah')
    print 'blah is %r' % blah
    try:
        print 'blah is at %s (%s)' % (blah.__file__, blah.__path__)
    except Exception, e:
        print 'Cannot give details on blah (%s)' % e
and the like. That generally shows you pretty quickly that your sys.path isn't what you thought it would be, and/or that blah is at some weird place or has a weird path.
To check your sys.path you can do what Alex said, but instead of using print you can use the logging module:
import logging

LOG_FILENAME = '/tmp/logging_example.out'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
logging.debug('This message should go to the log file')
Make sure your project (or the folder above your "dtms" app) is in python's module search path.
This is something you may need to set in your web server's configuration. The reason it works in the django shell is probably because you are in your project's folder when you run the shell.
This is explained here if you're using apache with mod_python.
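For example, a minimal sketch using the two paths from your log output above (in a real deployment this would normally live in the server configuration rather than be hard-coded):

import sys

sys.path.append('/home/victor/django/django_projects')
sys.path.append('/home/victor/django/django_projects/mysite')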
I could be way off with this, but did you set the DJANGO_SETTINGS_MODULE environment variable yet? It affects what you can import. Set it to "<project name>.settings". It's also something that gets set when you fire up manage.py, so things work there that won't work in other situations unless you set the variable beforehand.
Here's what I do on my system:
export DJANGO_SETTINGS_MODULE=<project name>.settings
or
import os
os.environ['DJANGO_SETTINGS_MODULE']='<project name>.settings'
Sorry if this misses the point, but when I hear of problems importing models.py, I immediately think of environment variables. Also, the project directory has to be on PYTHONPATH, but you probably already know that.
