Context:
I have a table in my database that uses values from an external database. This external database updates its values periodically.
Problem:
In order to update my database every time I start the server, I want to run a script right after runserver.
Potential Solution:
I have seen that it is possible to run a script from a certain app, which is something I'm interested in. This is achievable using django-extensions:
https://django-extensions.readthedocs.io/en/latest/runscript.html
However, this script only runs with the following command:
python manage.py runscript your_script
Is there any other way to run a script from an app and execute it right after the runserver command? I am open to suggestions!
Thanks in advance
Update
Thanks to @Raydel Miranda for the remarks; I feel I left some information out.
My goal is, once I start the server, to open a socket to keep my database updated.
You can execute the code in the top-level urls.py. That module is imported and executed once.
urls.py
from django.conf.urls import *
from your_script import one_time_startup_function
urlpatterns = ...
one_time_startup_function()
I would recommend using something like this. Let's say you have a script like this:
# abc.py
from your_app.models import do_something
do_something()
Now you can run this script right after runserver (or any other way you are running the Django application) like this:
python manage.py runserver & python manage.py shell < abc.py
FYI, it will only work if you have bash in your terminal (e.g. Linux, macOS).
Update
After reading your problem carefully, I think running a script after runserver might not be the best solution. As you said:
This external database updates its values periodically.
So, I think you need some sort of periodic task to do this update. You can use a cron job or you can use Celery for this.
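For example, a rough Celery sketch (assuming Celery is already set up for the project with the usual Django integration; the task name, schedule, and the sync_external_db helper are placeholders):

# myapp/tasks.py
from celery import shared_task

@shared_task
def update_from_external_db():
    # placeholder: whatever code copies the external values into your table
    from myapp.sync import sync_external_db
    sync_external_db()

# settings.py -- run the task every 15 minutes with Celery beat
CELERY_BEAT_SCHEDULE = {
    "update-external-db": {
        "task": "myapp.tasks.update_from_external_db",
        "schedule": 15 * 60,  # seconds
    },
}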
Running the script after runserver doesn't seem like a very good idea. The main reason is that there will be a window between the server starting (and becoming available to users) and the moment you finish synchronizing your data. Also, if you synchronize using a script after runserver, you won't get updates from the external db after that.
The best solution for this is to configure multiple databases; you can use the external database with read-only access. This way your views will always provide up-to-date data.
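A rough sketch of what that configuration could look like (engine, credentials and ExternalModel are placeholders; ExternalModel would be an unmanaged model mapping the external table, see the multi-db docs linked below):

# settings.py -- the local db plus an alias for the external one
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
    },
    "external": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "external_db",
        "USER": "readonly_user",     # an account with read-only access
        "PASSWORD": "secret",
        "HOST": "external.example.com",
    },
}

# in a view: read directly from the external database, so the data is always current
rows = ExternalModel.objects.using("external").all()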
On the other hand ...
If you want to use something like a script, it is better to write a Django custom management command (this way you don't have to deal with initializing Django settings and other issues) and execute it using cron or Celery, as @ruddra states in his/her answer (a rough sketch follows at the end of this answer).
That said, you should see this: https://docs.djangoproject.com/en/2.1/topics/db/multi-db/
This may help.
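Here is a minimal sketch of the custom command idea mentioned above (names are placeholders; the file must live under myapp/management/commands/ so manage.py can discover it):

# myapp/management/commands/sync_external.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Synchronize local tables with the external database"

    def handle(self, *args, **options):
        from myapp.sync import sync_external_db  # placeholder for the actual sync code
        sync_external_db()
        self.stdout.write(self.style.SUCCESS("External data synchronized"))

You can then run it by hand, from cron, or from a Celery task:
python manage.py sync_external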
You can edit yourapp/apps.py:
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # update my database here
        pass
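One caveat: ready() also runs for other management commands (migrate, shell, ...), and runserver's autoreloader starts two processes, so you may want to guard the update. A rough sketch (the RUN_MAIN check is a common heuristic for the autoreloader, and sync_external_db is a placeholder):

import os
import sys
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # run only for the real runserver child process, not for migrate/shell
        # or the autoreloader's parent process
        if 'runserver' in sys.argv and os.environ.get('RUN_MAIN') == 'true':
            from myapp.sync import sync_external_db  # placeholder for the actual update
            sync_external_db()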
Related
A similar question was asked here; however, the solution does not give the shell access to the same environment as the deployment. If I inspect os.environ from within the shell, none of the environment variables appear.
Is there a way to run the manage.py shell with the environment?
PS: As a little side question, I know the mantra for EBS is to stop using eb ssh, but then how would you run one-off management scripts (that you don't want to run on every deploy)?
One of the cases where you have to run something once is db schema migrations. Usually you store information about that in the db... so you can use the db itself to synchronize and ensure that something is triggered only once.
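For instance, a rough sketch of using the db itself as a run-once guard (TaskRun is a hypothetical model with a unique name field):

from myapp.models import TaskRun  # hypothetical model: name = CharField(unique=True)

obj, created = TaskRun.objects.get_or_create(name="2019-06-initial-sync")
if created:
    run_initial_sync()  # placeholder for the one-off work; runs only the first time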
Personally I have nothing against using eb ssh, but I do see problems with it. If you want to have CI/CD, that kind of manual operation is against the rules.
It looks like you are referring to the WWW/API part of Beanstalk. If you need something that runs quite frequently... maybe a worker environment is more suitable? The problem here is that if the API gets deployed first, you would have the wrong schema.
In general you are using EC2, so its user data stores the information that spins up your service. You can put your "stuff" there, but you still need to synchronize / ensure it runs only once. See the Beanstalk docs for more information on how to do that.
Edit
Beanstalk is a kind of instrumentation on top of EC2, so there must be a way to work with it, since you have access to the user data of those EC2 instances. No worries, you don't need to dig that deep: there is a good way of instrumenting your server called ebextensions. It can be used to put files on the server, trigger commands, set up cron, whatever you want.
You can create an ebextension with container_commands this time (see the Python Configuration Namespaces section). Those commands are executed on each deployment. Still, the problem is that you need to synchronize, since more than one deployment can run at the same time. The good part is that you can set the environment the way you want.
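For illustration, a container_commands entry might look roughly like this (the file name, command and flag are only an example of the idea; check the Beanstalk docs for your platform):

# .ebextensions/01_tasks.config
container_commands:
  01_migrate:
    command: "python manage.py migrate"
    leader_only: true   # run on a single instance so parallel instances don't collide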
I have no problem accessing the environment variables. How did you run into the problem? Try preparing a page with the map.
Suppose I have a Python module written to do some cleanup job and daily maintenance. It has no view or template but is simply a command-line tool. Is it possible to interact with the models and db regardless of whether the server is running?
Yes you can.
Look into the management commands shell and dbshell
You would just do
python manage.py shell  # you can call any method, modify model objects, ...
and
python manage.py dbshell  # gives direct access to the database via the command line
And this does not need the server to be running.
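For example, inside python manage.py shell you can run something like this (app, model and field names are placeholders):

from datetime import timedelta
from django.utils import timezone
from myapp.models import LogEntry  # your own model

# a typical daily-maintenance job: delete rows older than 30 days
cutoff = timezone.now() - timedelta(days=30)
LogEntry.objects.filter(created__lt=cutoff).delete()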
I tried to create an SQLAlchemy project in Pyramid, and when I run the server I get this error:
Pyramid is having a problem using your SQL database. The problem
might be caused by one of the following things:
1. You may need to run the "initialize_MyProject_db" script
to initialize your database tables. Check your virtual
environment's "bin" directory for this script and try to run it.
2. Your database server may not be running. Check that the
database server referred to by the "sqlalchemy.url" setting in
your "development.ini" file is running.
After you fix the problem, please restart the Pyramid application to
try it again.
When I check my development.ini file, the SQLite database is configured like this:
sqlalchemy.url = sqlite:///%(here)s/MyProject.sqlite
What needs to be changed here to configure it correctly?
I am running on a Linux box.
You need to create a database in either SQLite, Postgres, or any other backend. Then go to the development.ini file, edit sqlalchemy.url = sqlite:///%(here)s/MyProject.sqlite to specify the name of your database, and then run the initialize_MyProject_db development.ini command. If you are using MySQL, that line should be:
sqlalchemy.url = mysql://username:password@host/dbname
Trying Pyramid for the first time, I faced the same problem; after many command combinations, I got the solution.
From the project root, run the command:
initialize_tutorial_db development.ini
Info taken from Wiki2 SQLAlchemy tutorial
It says right there in the first point: you need to run initialize_MyProject_db development.ini to create the database.
If that's not the case, please post the log from running the server.
I've looked through quite a few questions on this already and have not found a solution for my particular case. The issue I'm currently facing is that my sqlite db file is getting wiped out when I call save() to insert a new row from an external Python script. However, this does not happen when using the same steps from one of my Django apps.
Is there some additional setup I need to have in place in order to use my Django model work with an external Python script that is not mentioned below?
In the Python script I've already added the following to the top of the file:
sys.path.append(my_project_path)
os.environ['DJANGO_SETTINGS_MODULE']='settings'
from django.conf import settings
I've also double-checked on the following items:
1. My settings module has the absolute path for the NAME of the db.
2. When I rm the DB file and run python manage.py syncdb, the table I desire is created; the DB file size is approximately 50k after this initialization.
This is the behavior I'm seeing:
After running Item 2 above, I can successfully create new rows for my table from my Django app.
After running Item 2 above, when I use the Django model from the Python script, I receive a DatabaseError stating there is "no such table". Additionally, the sqlite db file drops to zero size when this occurs.
Is there anything I'm missing on being able to use the Django model from another Python script? I'm using this script as a tool for some manual updates and would really like to get it working.
Here's a quick example of my setup:
Directories:
myproject/
myproject/myapp/
myproject/tools/
The Django model resides in myproject/myapp/ and the script resides in myproject/tools/. The PYTHONPATH includes myproject/. In the script, I have the following:
import os
import sys
sys.path.append(absolute_path_to_myproject)
os.environ['DJANGO_SETTINGS_MODULE']='settings'
from django.conf import settings
import myapp.models
my_model = myapp.models.my_model(name = 'TestName')
my_model.save()
You did not give much detail about your setup, but it sounds like there is a better way to do it.
Django already manages database connections for you, so there's no need to touch the actual sqlite file directly. Just run your script as a custom management command with python manage.py my_script. Here's how to do it: https://docs.djangoproject.com/en/dev/howto/custom-management-commands/
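If you do want to keep a standalone script instead, note that on Django 1.7+ you also need an explicit django.setup() call before importing models. A rough sketch, with placeholder paths:

# myproject/tools/manual_update.py
import os
import sys

sys.path.append("/absolute/path/to/myproject")
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")

import django
django.setup()  # loads installed apps so models can be imported safely

from myapp.models import my_model  # the model from the question
my_model(name="TestName").save()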
I'm new to Python (and relatively new to programming in general) and I have created a small Python script that scrapes some data off a site once a week and stores it in a local database (I'm trying to do some statistical analysis on downloaded music). I've tested it on my Mac and would like to put it onto my server (a VPS with WiredTree running CentOS 5), but I have no idea where to start.
I tried Googling for it, but apparently I'm using the wrong terms as "deploying" means to create an executable file. The only thing that seems to make sense is to set it up inside Django, but I think that might be overkill. I don't know...
EDIT: More clarity
You should look into cron for this, which will allow you to schedule the execution of your Python script.
If you aren't sure how to make your Python script executable, add a shebang to the top of the script, and then add execute permissions to the script using chmod.
1. Copy the script to the server.
2. Test the script manually on the server.
3. Set cron ("crontab -e") to a time that will trigger the script soon, so you can test it.
4. Once you've debugged any issues, set cron to the appropriate schedule.
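A small sketch of what those steps look like (paths and the schedule are just examples):

Add a shebang as the first line of your scraper script:
#!/usr/bin/env python
Make it executable, then schedule it with crontab -e, e.g. every Monday at 03:00:
chmod +x /home/you/scraper.py
0 3 * * 1 /home/you/scraper.py >> /home/you/scraper.log 2>&1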
Sounds like a job for Cron?
Cron is a scheduler that provides a way to run certain scripts (apps, etc.) at certain times.
Here is a short tutorial that explains how to set up cron.
See this for more general cron information.
Edit:
Also, since you are using CentOS: if you end up having issues with your script later on... it could partly be caused by SELinux. There are ways to disable SELinux on your server (if you have enough access permissions.) But... there are arguments against disabling SELinux, as well.