Django, store function in database - python

I'm trying to find a way to store a function in a Django Model.
My use case is an emailer system with scheduled and conditional sending, e.g. "send this email tomorrow if John has not logged in".
Ideally I would like to create emails with a test function field.
def my_test(instance):
    if (now() - instance.user.last_login) > timedelta(days=1):
        return False
    return True

Email(send_later_date=now() + timedelta(days=1),
      conditional=my_test)
Here is what I'm thinking about:
import importlib

Email(send_later_date=now() + timedelta(days=1),
      conditional='my_app.views.my_test')

class Email(models.Model):
    send_later_date = models.DateTimeField(null=True, blank=True)
    conditional = models.TextField()

    def send_email(self):
        if self.send_later_date < now():
            if not self.conditional:
                function_send_email()
            else:
                function_string = self.conditional
                mod_name, func_name = function_string.rsplit('.', 1)
                mod = importlib.import_module(mod_name)
                func = getattr(mod, func_name)
                if func():
                    function_send_email()
How would you do that? I was thinking of storing the function's import path in a text field, but how would you run it when needed? importlib seems interesting...

Storing the function name is a valid approach, since it is actually the one used by the pickle module. This has the benefit (in your case) that if you update the code of a function, the change will automatically apply to existing e-mails (but be careful with backward compatibility).
For convenience and safety (especially if the function name could come from user inputs), you may want to keep all such functions in the same module, and store only the function name in DB, not the full import path. So you could simply import the function repository and read it with getattr. I imagine you will also want to give parameters to these functions. If you don't want to restrict yourself to a given number / order / type of arguments, you could store in DB a dictionary as a JSON string, and expand it with the ** operator.
import my_app.func_store as store
import json

func = getattr(store, self.conditional)
params = json.loads(self.conditional_params)
if func(**params):
    function_send_email()
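Putting the pieces together, a minimal sketch of what the model could look like, assuming the conditional_params field, the my_app.func_store module and the function_send_email helper from the snippets above (they are placeholders, not an existing API):

import json
import my_app.func_store as store
from django.db import models
from django.utils.timezone import now

def function_send_email():
    # placeholder for the actual sending logic
    ...

class Email(models.Model):
    send_later_date = models.DateTimeField(null=True, blank=True)
    # name of a function living in my_app.func_store
    conditional = models.CharField(max_length=100, blank=True)
    # keyword arguments for that function, stored as a JSON string
    conditional_params = models.TextField(default='{}')

    def send_email(self):
        if self.send_later_date and self.send_later_date < now():
            if not self.conditional:
                function_send_email()
            else:
                func = getattr(store, self.conditional)
                params = json.loads(self.conditional_params)
                if func(**params):
                    function_send_email()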

Related

Python + Selenium: how to automatically add test name and ids parameter or number to screenshot name?

I am learning how to write tests in Python + Selenium based on Page Object.
In general, the test code itself looks like this:
@pytest.mark.parametrize("first_name", [double_first_name, empty_form, long_first_name],
                         ids=["double_first_name", "empty_form", "long_first_name"])
def test_reg_form_name(browser, first_name):
    passport_reg_page = RegForm(browser)
    passport_reg_page.go_to_site()
    passport_reg_page.reg_page()
    passport_reg_page.reg_first_name(first_name)
    passport_reg_page.reg_button()
    passport_reg_page = RegFormExpectations(browser)
    passport_reg_page.reg_expect_name()
    assert passport_reg_page.reg_expect_name()
    browser.save_screenshot('screenshots/test_reg_1.png')
How can I write and call a function so that the test name plus the parametrize id is automatically added to the screenshot name, or at least the test name plus the number of each parametrized run? For example:
browser.save_screenshot(f'screenshots/test_reg_form_name_{ids}.png')
or
browser.save_screenshot('screenshots/test_reg_form_name_001.png')  # 001, 002, 003, ...
It was suggested that I try:
def test_name(self, request):
    testname = request.node.name
    return testname
but I got the notice "Method 'test_name' may be 'static'".
The following also gets the function name, but I see no point in writing the name out just to get it back again:
browser.save_screenshot(f'screenshots/{test_reg_page.__name__}_1.png')
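One possible direction (a sketch, not a verified answer): pytest's built-in request fixture exposes request.node.name, which for a parametrized test already includes the id, e.g. test_reg_form_name[double_first_name]. A small fixture can turn that into a file name; the browser fixture and page-object steps are assumed from the question:

import pytest

@pytest.fixture
def screenshot_name(request):
    # for a parametrized test, request.node.name looks like
    # "test_reg_form_name[double_first_name]"
    return request.node.name.replace('[', '_').rstrip(']')

# keep the @pytest.mark.parametrize decorator from the question unchanged
def test_reg_form_name(browser, first_name, screenshot_name):
    # ... page-object steps from the question ...
    browser.save_screenshot(f'screenshots/{screenshot_name}.png')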

Python unit testing with PyTest, stuck on more advanced testing

I have been using pytest for some time now to write some simple tests (like the ones you find in tutorials and YouTube videos) and I thought it was now time to start writing actual tests for our Python scripts. The scripts are far more advanced than any shown in tutorials, so I am getting a bit stuck. I do not want the entire correct answer, but rather a nudge in the right direction if possible. Here is my issue:
We have a script that reads a .md text file and converts it to a PDF file based on an external template. Part of the script is shown below (I removed most of it because I first just want to have one running test):
class DocumentationEngine:
    def __init__(self, title, subtitle, series, style='TIIStyle_Digital_Aug_2020', templateFile='template.docet', tableOfContents=True, listOfFigures=False, listOfTables=False):
        self.title = title
        self.subtitle = subtitle
        self.series = series
        self.style = style
        self.template = {}
        self.hasTOC = tableOfContents
        self.hasLOF = listOfFigures
        self.hasLOT = listOfTables
        self.loadTemplate(templateFile)

    def loadTemplate(self, file='template.docet'):
        with open(file, "r") as templatefile:
            lines = templatefile.readlines()
            key = "dummy"
            value = ""
            for line in lines:
                line = line.strip()
                if line.startswith('[') and line.endswith(']'):
                    self.template[key] = value
                    key = line[1:-1]
                    value = ""
                else:
                    value += line + '\n'

    def build(self, versions=[], content='', filename='Documenter\\_Autogenerated'):
        document = self.template["doc"]
        document = document.replace("%%style%%", self.style)
        document = document.replace("%%body%%",
                                    self.buildFirstPage() +
                                    self.buildTableOfContents() +
                                    self.buildListOfFigures() +
                                    self.buildListOfTables() +
                                    self.buildVersionTable(versions, filename) +
                                    self.buildContentPages(content=content) +
                                    self.buildLastPage()
                                    )
        return document

    def buildLastPage(self):
        return self.template["last_page"]
I am trying to write a simple unit test for the buildLastPage method and have been stuck for several days now.
I am not sure whether or not I need to mock the template file, use a fixture and/or if I can actually test only that method with all dependencies.
I started with the following:
from doceng import DocumentationEngine
import pytest

class Test:
    def test_buildLastPage(self):
        build_last_page = DocumentationEngine()
        assert build_last_page.template(1) == 1
which gives me an error regarding 3 required arguments. When adding the arguments like this:
from doceng import DocumentationEngine
import pytest

class Test:
    def test_buildLastPage(self, title, subtitle, series):
        build_last_page = DocumentationEngine()
        assert build_last_page.template(1) == 1
which gives me an error that the fixture is not found.
I added a fixture in the conftest.py file like this:
import pytest
from doceng import DocumentationEngine

@pytest.fixture
def title(title):
    return title("test")
which gets me another error: recursive dependency involving fixture 'title' detected.
I'm quite stuck, so any nudge in the right direction for a newbie would be highly appreciated.
The fixture error is about your test function test_buildLastPage. The way you are using it, it only needs the self argument.
A test function in pytest without any decorators always expects its arguments to be fixtures of the same name. You did not define any such fixtures and you do not use the arguments in your function, so you can remove them safely.
The actual error points to DocumentationEngine(). The class expects 3 arguments when initializing the object, and you pass none. Check your __init__ function again to find the proper arguments.
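As a hedged sketch of one way forward (the template layout below is an assumption based on the loadTemplate parser, not the real template.docet file): create a minimal template with pytest's tmp_path fixture, pass it to the constructor together with the three required arguments, and assert on buildLastPage. Note that loadTemplate only stores a section when it reaches the next [header] line, so the sketch adds a trailing dummy header:

import pytest
from doceng import DocumentationEngine

@pytest.fixture
def template_file(tmp_path):
    # minimal template; [eof] is a dummy trailing header so the parser
    # saves the last_page section (it only stores a section when it
    # reaches the next [header] line)
    content = "[doc]\n%%style%%\n%%body%%\n[last_page]\nThe end\n[eof]\n"
    path = tmp_path / "template.docet"
    path.write_text(content)
    return str(path)

def test_buildLastPage(template_file):
    engine = DocumentationEngine(
        title="Title",
        subtitle="Subtitle",
        series="Series",
        templateFile=template_file,
    )
    assert engine.buildLastPage() == "The end\n"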

refactoring function to have a robust design

I have a simple app example here:
Say I have this piece of code, which handles a user's request to get the list of books stored in a database.
from .handlers import all_books

@apps.route('/show/all', methods=['GET'])
@jwt_required
def show_books():
    user_name = get_jwt_identity()['user_name']
    all_books(user_name=user_name)
and in handlers.py I have:
def all_books(user_name):
    db = get_db('books')
    books = []
    for book in db.books.find():
        books.append(book)
    return books
but while writing unit tests I realised that if I use get_db() inside all_books() it would be harder to unit test the method,
so I thought this would be a better way:
from .handlers import all_books

@apps.route('/show/all', methods=['GET'])
@jwt_required
def show_books():
    user_name = get_jwt_identity()['user_name']
    db = get_db('books')
    collection = db.books
    all_books(collection=collection)

def all_books(collection):
    books = []
    for book in collection.find():
        books.append(book)
    return books
I want to know which is the better design:
having all the code that does one thing in one place, as in the first example, or the second example?
To me the first one seems clearer because it keeps all the related logic in one place, but it is easier to pass a fake collection in the second case to unit test it.
You should probably use the mock library, see: https://docs.python.org/3/library/unittest.mock.html#quick-guide
(if you use Python 2 you will need pip install mock)
def test_it():
    from unittest.mock import Mock, patch
    import handlers  # import path assumed; patch get_db where it is looked up
    with patch.object(handlers, 'get_db', Mock(return_value=Mock(books=[1, 2, 3]))) as mocked_db:
        x = handlers.get_db("ASDASD")
        print(x.books)
        # you can also do cool stuff like this
        mocked_db.assert_called_with("ASDASD")
Of course, for yours you will have to construct a slightly more complex object:
my_mocked_get_db = Mock(return_value=Mock(books=Mock(find=Mock(return_value=[1, 2, 3, 4]))))
with patch.object(handlers, 'get_db', my_mocked_get_db) as mocked_db:
    x = handlers.get_db("ASDASD")
    print(x.books.find())
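As a side note on the design question itself: with the second version (all_books(collection)) the test does not need patching at all, because a plain Mock can stand in for the collection. A minimal sketch, assuming the names from the question:

from unittest.mock import Mock
from handlers import all_books  # import path assumed

def test_all_books_returns_every_document():
    fake_collection = Mock()
    fake_collection.find.return_value = [{"title": "A"}, {"title": "B"}]
    assert all_books(collection=fake_collection) == [{"title": "A"}, {"title": "B"}]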

Access current schema context

Let's say we have the following situation:
from django_tenants.utils import schema_context

def do_something(context):
    print("do_something")

def my_callable():
    tenant = "db_tenant"
    with schema_context(tenant):
        context = {"a": 1, "b": 2}
        do_something(context)

my_callable()
And the question is: is it possible to access the current tenant name in the do_something function without passing it as a parameter or storing it as a global variable?
I found a solution, but I don't know if it's stable. The current tenant name (or schema_name) can be accessed through the django.db connection as follows:
from django.db import connection
schema_name = connection.schema_name
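Applied to the example above, do_something could then read the active schema itself (a small sketch based on the snippet just shown):

from django.db import connection

def do_something(context):
    # inside the schema_context block this reflects the active tenant's schema
    print("do_something in schema:", connection.schema_name)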
No, this is not possible, or at least it would require some magical engineering to do so.
I'm assuming the only reason you wouldn't want to pass it as a parameter is that other things may be calling do_something as well without passing tenant as a parameter. In this case do:
def do_something(context, tenant=None):
    if tenant:
        print(tenant)
    else:
        print("do_something")
Now you can call do_something with do_something(context, tenant='Bob') or do_something(context) and either will be fine.

How to deal with globals in modules?

I'm trying to make non-blocking API calls to OpenWeatherMap, but my problem is:
When I was testing the file and running it directly, the global api took effect, but when importing the function, global doesn't work anymore and api didn't change: api = ''?
Just after declaring the function I put global api, and when I use print 'The API link is: ' + api I get the exact API link, but the global didn't take effect!
Here is the code: https://github.com/abdelouahabb/tornadowm/blob/master/tornadowm.py#L62
What am I doing wrong?
When I import the file:
from tornadowm import *
forecast('daily', q='london', lang='fr')
The API link is: http://api.openweathermap.org/data/2.5/forecast/daily?lang=fr&q=london
api
Out[5]: ''
When executing the file instead of importing it:
runfile('C:/Python27/Lib/site-packages/tornadowm.py', wdir='C:/Python27/Lib/site-packages')
forecast('daily', q='london', lang='fr')
The API link is: http://api.openweathermap.org/data/2.5/forecast/daily?lang=fr&q=london
api
Out[8]: 'http://api.openweathermap.org/data/2.5/forecast/daily?lang=fr&q=london'
Edit: here is the code, in case the Git repo gets updated:
from tornado.httpclient import AsyncHTTPClient
import json
import xml.etree.ElementTree as ET

http_client = AsyncHTTPClient()
url = ''
response = ''
args = []
link = 'http://api.openweathermap.org/data/2.5/'
api = ''
result = {}
way = ''

def forecast(way, **kwargs):
    global api
    if way in ('weather', 'forecast', 'daily', 'find'):
        if way == 'daily':
            way = 'forecast/daily?'
        else:
            way += '?'
        for i, j in kwargs.iteritems():
            args.append('&{0}={1}'.format(i, j))
        a = ''.join(set(args))
        api = (link + way + a.replace(' ', '+')).replace('?&', '?')
        print 'The API link is: ' + api

        def handle_request(resp):
            global response
            if resp.error:
                print "Error:", resp.error
            else:
                response = resp.body

        http_client.fetch(api, handle_request)
    else:
        print "please put a way: 'weather', 'forecast', 'daily', 'find' "

def get_result():
    global result
    if response.startswith('{'):
        print 'the result is JSON, stored in the variable result'
        result = json.loads(response)
    elif response.startswith('<'):
        print 'the result is XML, parse the result variable to work on the nodes,'
        print 'or, use response to see the raw result'
        result = ET.fromstring(response)
    else:
        print '''Sorry, no valid response, or you used a parameter that is not compatible with the way!\n please check http://www.openweathermap.com/api for more information'''
It's the side effect of using global.
When you do from tornadowm import * your forecast() function is, we could say metaphorically, "on its own" and is not "hard-linked" to your global space anymore.
Why? Because any effect you make on the global api stays within the tornadowm module, and the definition api = '' that was imported into your own global space takes precedence.
Also, as a side note, it's not considered good practice to use from something import *. You should do from tornadowm import forecast or, even better, import tornadowm and then use tornadowm.forecast().
OR
Even better, I just noticed your forecast() function doesn't return anything. Which technically makes it not a function anymore, but a procedure (a procedure is like a function but it returns nothing, it just "does" stuff).
Instead of using a global, you should define api in this function and then return api from it. Like this:
def forecast(blablabla):
    api = "something"
    blablabla
    return api
And then
import tornadowm
api = tornadowm.forecast(something)
And you're done.
Globals are global only to the module they're defined in. So, normally, you would expect tornadowm.api to be changed when you call forecast, but not api in some other namespace.
The import * is contributing to the confusion here. It imports api (among other names) into the importing namespace. This means that api and tornadowm.api initially point to the same object. But these two names are not linked in any way, so calling forecast() changes only tornadowm.api, and now the two names point to different objects.
To avoid this, don't use import *. It is bad practice anyway and this is just one of the reasons. Instead, import tornadowm and access the variable in the importing module as tornadowm.api.
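A quick sketch of the two names diverging, reusing the forecast call from the question (it assumes the module is importable as shown earlier):

import tornadowm
from tornadowm import api as imported_api  # binds to the current '' object

tornadowm.forecast('daily', q='london', lang='fr')
print(tornadowm.api)   # the rebound module-level value: the full API link
print(imported_api)    # still '' -- the imported name was never rebound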
I'm afraid this is because a global is coupled to the module it lives in. By the time you do from tornadowm import * you have imported the api name, but rebinding the global api won't take effect in another module.
