I am using Flask-SQLAlchemy to create the database, which in turn creates an app.db file to store tables and data. For a backup it should be simple to just copy app.db somewhere on the server. But suppose the app is writing data to app.db while we make the copy; then we might end up with an inconsistent app.db file.
How do we keep the backup consistent? I could implement locks to do so, but I was wondering about standard, good solutions for database backup and how this is implemented in Python.
SQLite has the backup API for this, but it is not exposed by the built-in Python driver (note: sqlite3.Connection.backup() was added later, in Python 3.7).
You could
use the APSW library for the backup; or
execute the .backup command in the sqlite3 command-line shell; or
run BEGIN IMMEDIATE to prevent other connections from writing to the DB, and copy the file(s).
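For illustration, a minimal sketch of the BEGIN IMMEDIATE approach using only the standard library (file names are placeholders; if the database is in WAL mode, the -wal file would need to be copied as well):

import shutil
import sqlite3

def backup_sqlite(src="app.db", dest="app.db.bak"):
    conn = sqlite3.connect(src)
    try:
        # BEGIN IMMEDIATE takes a write lock, so no other connection can
        # modify the database while the file is being copied.
        conn.execute("BEGIN IMMEDIATE")
        shutil.copyfile(src, dest)
    finally:
        conn.rollback()  # release the lock; nothing was actually written
        conn.close()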
I also wanted to create backups of my SQLite file, which stores the data for my Flask app.
Since my Flask site doesn't have many users, and the database doesn't hold more than 4 small-to-medium tables, I did the following:
#Using Linux cmd
cd pathOfMyDb
nano backupDbRub.sh # backupDbRub.sh is the script that will be called periodically
The nano text editor will open the file, and you can write the following:
#!/bin/bash
# Build a '.backup "<timestamp>_site.db"' command and pass it to sqlite3
current_time=$(date "+%Y.%m.%d-%H.%M.%S")
sep='"'
file_name='_site.db"'
functionB='.backup '
queryWrite="$functionB$sep$current_time$file_name"
echo "$queryWrite"
sqlite3 yoursqlitefile.db "$queryWrite"
Basically, with this code we build a command string ($queryWrite) that makes sqlite3 write a copy of the current database to a timestamped file. Now, on Unix, you will have to modify the crontab file.
Write in the Unix terminal
crontab -e
In this new file, as explained in this post, you should add the periodicity (in my case, for testing purposes, I wanted to run it every minute to check that it was working):
* * * * * /backupDbRub.sh
Note that the copies will appear in the folder you choose in the $queryWrite string (with no path given, the directory the script is run from), so you can find them there.
If you want to test that the script works, go to the folder where the script lives and run
./backupDbRub.sh
This should print the 'echo' line from the script. If the file lacks execute permissions, add them with 'chmod +x backupDbRub.sh' (or whatever other permissions you need to set).
I've made a really simple single-user database application with web2py to be deployed to a desktop machine. The reason I chose web2py is its simplicity and its non-intrusive web server.
My problem is that I need to migrate an existing database from another application, which I've preprocessed and prepared into a CSV file that can now be imported perfectly into web2py's SQLite database.
Now, I have a problem with an 'upload' field in one of the tables, which corresponds to a small image. I've filled that field in the CSV with the name of the corresponding .jpg file that I extracted from the original database. The problem is that I have not figured out how to insert these files correctly into the uploads folder, as the web2py engine automatically renames users' uploads to a safe format, and copying my files straight into the folder does not work.
My question is: does anyone know a proper way to include this image collection in the uploads folder? I don't know if there is a way to disable this protection or if I have to manually change their names to a valid hash. I've also considered the idea of coding an automatic insert process into the database...
Thanks all for your attention!
EDIT (a working example):
An example database:
db.define_table('product',
    Field('name'),
    Field('color'),
    Field('picture', 'upload'),
)
Then using the default appadmin module from my application I import a csv file with entries of the form:
product.name,product.color,product.picture
"p1","red","p1.jpg"
"p2","blue","p2.jpg"
Then in my application I have the usual download function:
def download():
    return response.download(request, db)
I call this to request the images uploaded to the database, for example to include them in a view:
<img src="{{=URL('download', args=product.picture)}}" />
So my problem is that I have all the images corresponding to the database records, and I need to import them into my application by properly including them in the uploads folder.
If you want the files to be named via the standard web2py file upload mechanism (which is a good idea for security reasons) and easily downloaded via the built-in response.download() method, then you can do something like the following.
In /yourapp/controllers/default.py:
def copy_files():
    import os
    # loop over every product and re-store its picture through web2py's
    # upload machinery, so the file gets a safe name in /uploads
    for row in db().select(db.product.id, db.product.picture):
        picture = open(os.path.join(request.folder, 'private', row.picture), 'rb')
        row.update_record(picture=db.product.picture.store(picture, row.picture))
    return 'Files copied'
Then place all the files in the /yourapp/private directory and go to the URL /default/copy_files (you only need to do this once). This will copy each file into the /uploads directory and rename it, storing the new name in the db.product.picture field.
Note, the above function doesn't have to be a controller action (though if you do it that way, you should remove the function when finished). Instead, it could be a script that you run via the web2py command line (needs to be run in the app environment to have access to the database connection and model, as well as reference to the proper /uploads folder) -- in that case, you would need to call db.commit() at the end (this is not necessary during HTTP requests).
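For reference, here is a rough sketch of that script variant (the file name and invocation are just examples; it assumes the usual web2py scripting environment, e.g. python web2py.py -S yourapp -M -R copy_files_script.py, so that db and request are available):

# copy_files_script.py - same logic as the controller action above, run as a script
import os

for row in db().select(db.product.id, db.product.picture):
    path = os.path.join(request.folder, 'private', row.picture)
    with open(path, 'rb') as picture:
        row.update_record(picture=db.product.picture.store(picture, row.picture))

db.commit()  # required outside an HTTP request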
Alternatively, you can leave things as they are and instead (a) manage uploads and downloads manually instead of relying on web2py's built-in mechanisms, or (b) create custom_store and custom_retrieve functions (unfortunately, I don't think these are well documented) for the picture field, which will bypass web2py's built-in store and retrieve functions. Life will probably be easier, though, if you just go through the one-time process described above.
I know you can easily import a YAML file into a Django database (in order to populate the database before starting the project, for instance), but how can I do the opposite (i.e. save the complete database into a single .yaml file)?
I read there is a way to export one single table into a file:
from django.core import serializers

YAMLSerializer = serializers.get_serializer("yaml")
yaml_serializer = YAMLSerializer()
with open("file.yaml", "w") as out:
    yaml_serializer.serialize(SomeModel.objects.all(), stream=out)
but I need to do it for the complete database (which has many tables with complex relations between them).
I could write a script to do that for me, but I don't want to redo something that has probably been done already, and I wouldn't know the best way to do it so that Django has no trouble reading it back afterwards.
So far, I've been working on a SQLITE3 database engine.
Any ideas?
You need the dumpdata management command.
pip install pyyaml
python manage.py dumpdata --format=yaml > /path/to/dump_file.yaml
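The same dump can also be produced from Python code if that is more convenient (a sketch assuming a configured Django environment, e.g. run from manage.py shell; the output path is just an example):

from django.core.management import call_command

with open("/path/to/dump_file.yaml", "w") as out:
    call_command("dumpdata", format="yaml", stdout=out)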
I need to copy an existing neo4j database in Python. I even do not need it for backup, just to play around with while keeping the original database untouched. However, there is nothing about copy/backup operations in neo4j.py documentation (I am using python embedded binding).
Can I just copy the whole folder with the original neo4j database to a folder with a new name?
Or is there any special method available in neo4j.py?
Yes, you can copy the whole DB directory when you have cleanly shut down the DB for backup.
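For example, something along these lines (paths are placeholders, and graphdb stands for your embedded GraphDatabase instance; shutting it down cleanly first is the important part):

import shutil

graphdb.shutdown()  # cleanly stop the embedded database first
shutil.copytree('data/original.db', 'data/playground.db')  # copy the whole DB directory
# open the copy from neo4j.py to experiment without touching the original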
I have a requirement where I need to insert Postgres data into MySQL. Suppose I have a user table in Postgres, and I have a user table in MySQL as well. I tried to do something like this:
gts = 'cd ' + js_browse[0].js_path  # gts prints the correct folder name: /usr/local/myfolder_name
os.system(gts)
gts_home = 'export GTS_HOME=' + js_browse[0].js_path
os.system(gts_home)
tt = gts + ' && sh bin/admin.sh User --input-dir /tmp/import'
# inside /tmp/import I store my postgres user table data
# bin is the folder inside myfolder_name
If I use the commands directly from the shell, it works perfectly fine:
cd /usr/local/myfolder_name
bin/admin.sh User -account=1 user=hamid -create
But I am unable to store data inside MySQL this way. Any help would be appreciated.
You don't really give us much information. And why would you go from Postgres to MySQL?
But you can use one of these tools; I have seen people speak well of them:
pg2mysql or pgs2sql
Hope it works out.
PostgreSQL provides the ability to dump data in CSV format using the COPY command.
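For example, the PostgreSQL side could be handled roughly like this (a sketch using psycopg2; connection details, table, and file path are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=sourcedb user=me")
with conn, conn.cursor() as cur, open('/tmp/user.csv', 'w') as f:
    # stream the table out as CSV (the "user" table from the question)
    cur.copy_expert('COPY "user" TO STDOUT WITH CSV HEADER', f)
conn.close()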
The easiest path for you will be to spend some time once copying the schema objects from PostgreSQL to MySQL; you can use pg_dump -s for this on the PostgreSQL side. IMHO, moving the schemas properly will be the biggest challenge.
Then you should import the CSV-formatted data dumps into MySQL; check this for reference. Scrolling down to the comments you'll find recipes for Windows as well. Something like this should do the trick (adjust the parameters accordingly):
LOAD DATA LOCAL INFILE 'C:\\test.csv'
INTO TABLE tbl_temp_data
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
I'm using Django to build a website with a MySQL (MyISAM) backend.
The database data is imported from a number of XML files that an external script processes and outputs as a JSON file. Whenever a new JSON file differs from the old one, I need to wipe the old MySQL db and recreate it using manage.py loaddata (at least that's the easy way to do it; I guess I could check the differences between the JSON files and apply those to the database, but I haven't figured out a good solution for this, and I'm neither a very good coder nor a web developer).
Anyway, the JSON file is around 10 MB and ends up being about 21,000 rows of SQL (it's not expected to grow significantly). There are 7 tables, and they all look something like this:
class Subnetwork(models.Model):
    SubNetwork = models.CharField(max_length=50)
    NetworkElement = models.CharField(max_length=50)
    subNetworkId = models.IntegerField()
    longName = models.CharField(max_length=50)
    shortName = models.CharField(max_length=50)
    suffix = models.CharField(max_length=50)
It takes up to a minute (sometimes only 30 seconds) to import it into MySQL. I don't know if this is to be expected for a file of this size. What can I do (if anything) to improve performance?
For what it's worth, here's some profiler output https://gist.github.com/1287847
There are a couple of possible solutions, some more decent than others, but here is a workaround that keeps your system's "downtime" minimal without having to write a db synchronization mechanism (which would probably be a better solution in most cases):
Create a custom settings_build.py file, with from settings import *, that chooses a random name for a new db (probably with the date in the db name), creates it by calling mysqladmin, and updates the name in DATABASES (see the sketch after these steps).
Create a custom Django management command (let's call it builddb), either by cloning the loaddata command or by calling it. On success, it should write the db name to a one-line dbname text file and execute a shell command that reloads your Django (apache/gunicorn/?) server.
Modify your settings.py to load the database name from the text file.
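For illustration, a very rough sketch of the settings_build.py idea (all names are made up, and mysqladmin may need extra credentials or options in your setup):

# settings_build.py - reuse the normal settings, then point at a fresh db
from settings import *  # noqa
import datetime
import subprocess

NEW_DB_NAME = 'site_build_' + datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
# create the empty database; add credentials as your setup requires
subprocess.check_call(['mysqladmin', '-u', DATABASES['default']['USER'], 'create', NEW_DB_NAME])
DATABASES['default']['NAME'] = NEW_DB_NAME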
And now run your build process like this:
./manage.py builddb --settings=settings_build
I solved it by exporting the processed XML files to CSV instead of JSON, and then using a separate script that called mysqlimport to do the importing.
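In case it helps, the mysqlimport call in that separate script could look roughly like this (database name, file path, and delimiter are placeholders; mysqlimport loads each CSV into the table named after the file's base name):

import subprocess

subprocess.check_call([
    'mysqlimport', '--local',
    '--fields-terminated-by=,',
    'mydatabase',            # target MySQL database
    '/tmp/subnetwork.csv',   # loaded into the table named "subnetwork"
])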