I am writing a Django application and already have one MySQL backend database configured in my settings.py.
I know we can add as many database configurations as we want, but that means hard-coding them, which I don't want to do and can't realistically do: I have to connect ad hoc to roughly 70-80 different remote machines, query them, and fetch the results.
I am planning to connect to those machines via their IP addresses.
I am comparatively new to Django, so I was wondering if we can somehow make a function which queries a machine by swapping in a configuration like:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dbName',
        'USER': 'root',
        'PASSWORD': 'root',
        'HOST': '',
        'PORT': '3306',
    }
}
So instead of a fixed DATABASES and default entry, I could have my function change the configuration, through an Ajax call or something!
Fortunately, every machine I have to connect to uses MySQL, so no problem there.
I looked into the MySQL Connector/Python driver, but I'm not sure if I should use it as I already have MySQLdb installed. I also have to run some raw queries.
Could anyone guide me on the best approach for this situation?
P.S.: I have also looked at this post, which discusses connecting to a remote MySQL machine from a local one, but that was of no help to me. :(
I believe there are quite a few paths you can take, three of which are:
Add all your connections in DATABASES - which you said you don't want to do because you have so many connections.
Connect using Python's MySQL library directly. If you do this, I don't think you'll get to use Django's nice ORM; see the sketch after this list.
Look at how Django wraps connections to allow you to use its ORM. I did some quick searches about manually establishing a connection through the Django ORM but didn't find anything; the answers are in the source code. I believe you can instantiate your own connections and interact with your remote database through the ORM. I don't have time to look through it now, but everything is in the source.
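For the second path, here is a minimal sketch of an ad-hoc helper, assuming MySQLdb is installed and every remote machine accepts the same credentials and database name (the function name, defaults, and the IP in the usage line are illustrative, not anything from your setup):

import MySQLdb

def query_remote(ip, sql, params=None,
                 user='root', passwd='root', db='dbName', port=3306):
    # open a short-lived connection to one remote MySQL host,
    # run a single raw query, and return all rows
    conn = MySQLdb.connect(host=ip, user=user, passwd=passwd, db=db, port=port)
    try:
        cursor = conn.cursor()
        cursor.execute(sql, params)
        return cursor.fetchall()
    finally:
        conn.close()

# e.g. rows = query_remote('10.0.0.17', 'SELECT COUNT(*) FROM sometable')

For the third path, the usual trick is to add a new alias to django.db.connections.databases at runtime and then use connections['alias'].cursor(); that relies on Django internals rather than a documented API, so test it against your Django version.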
I need to develop a new Django project (let's call it new_django) using a SQL Server 2019 database named AppsDB which already hosts another Django project (let's call it old_django). The two apps are completely separate from each other. Unfortunately, I can't get a new database for each new Django project, so I have to reuse AppsDB. What I don't understand is: how can I tell Django not to overwrite the existing auth_... and django_... tables generated by old_django?
My first idea was to use different schemas for the two projects, but Django doesn't support this with a SQL Server database as far as I know. Some workarounds suggest changing the database's default schema for a given user, like this answer. But I won't get a new user for every project either, and relying on manually changing the db schema every time before I migrate something will most certainly cause a mess at some point.
I'm stuck with the current setup and would like to know if anyone has come up with a more elegant solution or a different approach to my problem.
Any help is much appreciated!
All you need to do is create a new database on the MSSQL server and then point your Django application at it like this (the 'mssql' engine comes from the mssql-django backend package):
DATABASES = {
    'default': {
        'ENGINE': 'mssql',
        'NAME': 'YOUR_DATABASE_NAME',
        'USER': 'DB_USER',
        'PASSWORD': 'DB_PASSWORD',
        'HOST': 'YOUR_DATABASE_HOST',
        'PORT': '',
        'OPTIONS': {
            'driver': 'ODBC Driver 13 for SQL Server',
        },
    }
}
I'm already deploying my Django app, using PostgreSQL as my database and Heroku for hosting. However, I don't know what I should place in HOST instead of localhost.
Note: this works perfectly if I run it locally.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'trilz',
        'USER': 'postgres',
        'PASSWORD': 'franz123',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
You should probably use environment variables (the linked guide actually uses databases as an example). Using hardcoded values leaves you vulnerable to a bunch of different risks. The Django guide also presents connection files.
Once you've started using those, you need to figure out where your Postgres database is actually running. localhost means "my machine" (i.e. the same machine that runs the Django app). If you are using a database-as-a-service, it will expose all the environment variables you need. If you are using something like Heroku, it exposes environment variables to the running service, and those are what you'll use. If you are running your own Kubernetes/Docker setup, then you control where the database runs and need to point HOST at it.
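As a minimal sketch of the environment-variable approach (the DB_* variable names are illustrative, not anything Heroku sets for you automatically):

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'trilz'),
        'USER': os.environ.get('DB_USER', 'postgres'),
        'PASSWORD': os.environ.get('DB_PASSWORD', ''),
        # falls back to localhost for local development
        'HOST': os.environ.get('DB_HOST', 'localhost'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}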
For Heroku:
I've used https://pypi.org/project/dj-database-url/ for a hobby project (it's no longer maintained, but it still worked the last time I used it).
My config then looked like this:
DATABASES = {"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": "mydatabase"}}
if "DATABASE_URL" in os.environ:
logger.info("Adding $DATABASE_URL to default DATABASE Django setting.")
DATABASES["default"] = dj_database_url.config(conn_max_age=600)
DATABASES["default"]["init_command"] = "SET sql_mode='STRICT_TRANS_TABLES'"
That gives you a working SQLite3 fallback if no URL is set; you can use something else if you'd like. Heroku exposes an environment variable called DATABASE_URL that points at the database you configured in Heroku; the if "DATABASE_URL" in os.environ: check catches it, and the parsed configuration is used from then on. Did this provide a sufficient answer?
I am trying to set up a website using the Django framework. Because of its convenience, I had chosen SQLite as my database from the start of my project. It's very easy to use and I was very happy with this solution.
Being a new developer, I am quite new to GitHub and database management. Since SQLite databases live in a single file, I was able to push my updates to GitHub until that .db file exceeded the critical size of 100 MB. Since then, the file has been too large to push to my repository (for others having the same problem, I found satisfying answers here: GIT: Unable to delete file from repo).
Because of this problem, I am now considering an alternative solution:
Since my website will require users to interact with my database (they are expected to post a certain amount of data), I am thinking about switching from SQLite to MySQL. I was told MySQL handles user input better and scales more easily (I dare to expect a large volume of users). This is the first part of my question: is switching to MySQL after having used SQLite for a while a good idea/good practice, or will it lead to migration problems?
If the answer to that first question is yes, then I have other questions about how to handle this change. Since SQLite is serverless, I will have to set up a new server for MySQL. Will I be able to access my data remotely with that server? Since I used to push my database to my GitHub repository, that is where I would get my data from when I wanted to work remotely. Will there be a way for me to host my data on a server (hopefully for free) and fetch it the same way I fetch my code from GitHub?
Thank you very much for your help and I hope you have a nice day.
First of all, you shouldn't be uploading any sensitive data to your repository. That includes database passwords, Django's secret key and, in the case of SQLite, the database itself.
Answering your first question: there shouldn't be any problem switching from SQLite to MySQL. Django handles migrations exceptionally well, and SQLite has fewer features than MySQL. To move your data to a MySQL database you can use Django's dumpdata and loaddata management commands; a sketch follows.
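A minimal sketch of that route, driven from Python via call_command (the commands are usually run from the shell instead; the dump file name is illustrative):

from django.core.management import call_command

# 1. With the old SQLite settings still active, export the data.
#    contenttypes is excluded because `migrate` recreates it and
#    importing it again tends to cause integrity errors.
call_command("dumpdata", exclude=["contenttypes"], output="dump.json")

# 2. Point DATABASES at MySQL and run `migrate` to create the schema.

# 3. Load the exported data into the new database.
call_command("loaddata", "dump.json")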
Now, your second question is a bit more complicated. You can always expose your database to the Internet, but that is usually not a good idea unless you know exactly what you're doing and how to secure it properly. If you go this way, you can just change the database parameters in your settings file to point at your MySQL database's public IP and add the db name, user and password.
My recommendation, though, is to have one database for development on your dev PC and another on your production server that sits behind a firewall and can only be accessed through localhost. I don't think the db on your dev PC needs to be always up to date; some sample data should be enough.
So, instead of writing sensitive data into the settings file, you can have a secrets.json file in the root of your project that looks like this (set "debug" to true on your dev PC and false on your prod server):
{
    "secret_key": "YOURSUPERSECRETKEY",
    "debug": true,
    "allowed_hosts": ["127.0.0.1", "localhost", "YOURDOMAIN"],
    "db_name": "YOURDBNAME",
    "db_user": "YOURDBUSER",
    "db_password": "YOURDBPASSWORD",
    "db_host": "localhost",
    "db_port": 3306
}
This file should be included in your .gitignore so it doesn't get pushed to your repository, and you would have one on your local PC and another with different settings on your production server (you can use vi or nano to create it).
Then in your settings.py file you can do the following:
import json
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

try:
    with open(os.path.join(BASE_DIR, 'secrets.json')) as handle:
        SECRETS = json.load(handle)
except IOError:
    # no secrets.json found; the lookups below will then fail loudly
    SECRETS = {}

SECRET_KEY = SECRETS['secret_key']
ALLOWED_HOSTS = SECRETS['allowed_hosts']
DEBUG = SECRETS['debug']
...
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': SECRETS['db_name'],
        'USER': SECRETS['db_user'],
        'PASSWORD': SECRETS['db_password'],
        'HOST': SECRETS['db_host'],
        'PORT': SECRETS['db_port'],
    }
}
I am building a Django project that uses a relational DB (SQLite for development purposes) and a non-relational DB (OrientDB). This is my first time using a non-relational DB and I'm having difficulty getting it set up with Django.
The use of OrientDB in my project is solely to keep track of friend and friend-of-friend relationships, while all other user data is stored in my relational DB.
I know I need to register the DB in my settings file. I am trying to do something like this:
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    },
    'friends': {
        'NAME': 'friends',
        'ENGINE': 'django.db.backends.orientdb',
        'USER': 'root',
        'PASSWORD': 'hello',
        'HOST': '',
        'PORT': '2480',
    }
}
When I do this, however, I get the error:
No module named 'django.db.backends.orientdb'
Is this backend module something I have to create myself, or can I manually connect to the DB in my code whenever I need something specific done? For example, whenever someone creates a new user in my SQLite DB, can I use a post_save signal to
connect to OrientDB,
create a friend instance in OrientDB, and
disconnect from OrientDB?
It seems like there ought to be a much cleaner way of doing this.
This is almost certainly something you'll need to build yourself, though your use case doesn't sound like it requires a whole Django backend. A few manual queries might be enough.
Django officially supports PostgreSQL, MySQL, SQLite, and Oracle. There are third-party backends for SAP SQL Anywhere, IBM DB2, Microsoft SQL Server, Firebird, and ODBC.
There is an abandoned project that attempted to provide an OrientDB backend for Django, but it hasn't been updated in quite a long time and likely needs a lot of love:
This project isn't maintained anymore, feel free to fork and keep it alive.
No matter how you choose to proceed, you should probably take a look at OrientDB's Python library.
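As a rough sketch of the manual-queries route, here is what the post_save idea from the question could look like with the pyorient client; the Person vertex class, the credentials, and the use of OrientDB's binary port 2424 are all assumptions, not anything Django or OrientDB mandates:

import pyorient
from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=User)
def mirror_user_to_orientdb(sender, instance, created, **kwargs):
    if not created:
        return
    client = pyorient.OrientDB("localhost", 2424)  # binary protocol port
    try:
        client.connect("root", "hello")
        client.db_open("friends", "root", "hello")
        # `Person` is an illustrative vertex class; create it beforehand
        client.command("CREATE VERTEX Person SET user_id = %d" % instance.pk)
    finally:
        client.db_close()

Opening a connection per signal is wasteful, but it matches the connect/create/disconnect flow asked about; a shared client or connection pool would be the next refinement.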
I'm trying to implement a failover strategy in Celery for when my MySQL backend is down.
I found in another Stack Overflow answer that failover is possible in SQLAlchemy. However, I couldn't reproduce the same behavior in Celery using sqlalchemy_engine_options:
app.conf.result_backend = 'db+mysql://scott:tiger@localhost/foo'
app.conf.sqlalchemy_engine_options = {
    'connect_args': {
        'failover': [{
            'user': 'root',
            'password': 'password',
            'host': 'other_db.com',
            'database': 'dbname'
        }]
    }
}
What I'm trying to do is: if the first backend (scott:tiger) does not respond, switch to the root:password backend.
There is definitely more than one way to achieve failover. You could start with a simple try..except and handle the situation where your preferred backend is not responding; in the simplest (and probably not very Pythonic) way you could try something like this:
try:
    # initialise your preferred SQL backend here and set the connection up
    connection = connect_primary()  # placeholder for your own setup code
except Exception:
    # initialise your backup SQL backend here
    connection = connect_backup()   # placeholder for your own setup code
You could also move the backend selection into your infrastructure so it's transparent from the application's perspective, e.g. by using a session pooling system (I am not a MySQL user, but in the PostgreSQL world we have pgpool).
--- edit ---
I realised you probably want the database session and connection handled by Celery itself, so the above very likely does not answer your question directly. In my own simple project I initialise the database connection inside the tasks that require it, since in my particular case most tasks don't need the database at all.
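For reference, this is roughly what the SQLAlchemy-level failover from the linked answer looks like outside Celery; it assumes the mysql-connector-python driver, whose connect() accepts a failover list of alternative servers (host names and credentials here are illustrative):

from sqlalchemy import create_engine

engine = create_engine(
    "mysql+mysqlconnector://scott:tiger@localhost/foo",
    # connect_args is passed straight through to the DBAPI driver;
    # Connector/Python tries the servers in `failover` in order
    connect_args={
        "failover": [{
            "user": "root",
            "password": "password",
            "host": "other_db.com",
            "port": 3306,
            "database": "dbname",
        }]
    },
)

If Celery's database result backend passes engine options through to create_engine, this same connect_args dict is what would need to reach it.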