MySQL: how to back up data to another machine - python

I have a large MySQL database on server_1 (OS is Windows), and I want to copy all the data on server_1 to server_2 (OS is CentOS). I tried exporting the data on server_1 to an SQL file and sourcing that file on server_2, but it takes a lot of time.
I think writing code (Pandas) to copy the data is an option, but the data is very large and server_1 and server_2 are not on the same LAN (they have private IPs), so considering network congestion, it may not be a good choice.
I hope you can suggest a good solution. Thanks!

Take a backup on your Windows machine with mysqldump from the command line:
mysqldump -R --triggers --events -uroot -p<root_pass> --all-databases > c:/backup/mybackup.sql
Now move this backup to your CentOS machine; you can use WinSCP for this (archive the file first if required).
Then restore the data with the command below:
mysql -uroot -p<root_pass> < /backup_path/mybackup.sql
Update 1
single db backup:
mysqldump -R -uroot -proot_pass db1 > c:/backup/db1.sql
multiple db backup:
mysqldump -R -uroot -proot_pass -B db1 db2 db3 > c:/backup/db1_2_3.sql
Single/multiple tables backup:
mysqldump -uroot -proot_pass db1 tbl1 tbl2 tbl3 > c:/backup/db1_tbl_1_2_3.sql
Further, since your DB size is 1 TB, even mysqldump will take a long time, so you can also simply copy the binary data files. It is not a clean procedure, but you can use it:
Step1: Stop your mysql service.
Step2: Archive your mysql data directory and move it to the target machine.
Step3: Stop the mysql service on the target machine, take a backup of all files that exist in its mysql data directory, and then clean that directory out.
Step4: Copy all the data from the archived source data directory into the target machine's mysql data directory.
Step5: Change the ownership of these copied files in the mysql data directory with the command below:
$ chown -R mysql.mysql /var/lib/mysql
Note: Assuming your data directory is /var/lib/mysql
Step6: Start your mysql service.
Note: You may see a few warnings in your mysql log file, but mysql should work fine.
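Since the question is tagged python, here is a minimal sketch of the same workflow driven from Python: run mysqldump with subprocess and push the dump to the CentOS box over SFTP with paramiko instead of WinSCP. The host name, credentials, and paths are placeholders, not values from the question.

import subprocess

import paramiko  # third-party: pip install paramiko

# Placeholder values; replace with your own credentials and paths.
LOCAL_DUMP = "c:/backup/mybackup.sql"
REMOTE_DUMP = "/backup_path/mybackup.sql"

# 1. Dump all databases locally (same flags as the mysqldump command above).
subprocess.run(
    ["mysqldump", "-R", "--triggers", "--events",
     "-uroot", "-proot_pass", "--all-databases",
     "--result-file", LOCAL_DUMP],
    check=True,
)

# 2. Copy the dump to the CentOS server over SFTP.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("server_2.example.com", username="root", password="ssh_pass")
sftp = ssh.open_sftp()
sftp.put(LOCAL_DUMP, REMOTE_DUMP)
sftp.close()
ssh.close()

On the CentOS side you would then restore with the mysql command shown above.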

Related

Read data tables from an SQL file containing an entire database [duplicate]

How can I import a database with mysql from terminal?
I cannot find the exact syntax.
Assuming you're on a Linux or Windows console:
Prompt for password:
mysql -u <username> -p <databasename> < <filename.sql>
Enter password directly (not secure):
mysql -u <username> -p<PlainPassword> <databasename> < <filename.sql>
Example:
mysql -u root -p wp_users < wp_users.sql
mysql -u root -pPassword123 wp_users < wp_users.sql
See also:
4.5.1.5. Executing SQL Statements from a Text File
Note: If you are on Windows, you will have to cd (change directory) to your MySQL/bin directory inside CMD before executing the command.
Preferred way on Windows:
Open the console and start the interactive MySQL mode
use <name_of_your_database>;
source <path_of_your_.sql>
mysql -u <USERNAME> -p <DB NAME> < <dump file path>
-u - for Username
-p - to prompt the Password
Eg. mysql -u root -p mydb < /home/db_backup.sql
You can also provide the password directly after -p, but for security reasons this is not recommended: the password will appear in the command itself rather than being masked.
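If you are scripting the import from Python and want to keep the password off the command line, one hedged approach is to prompt for it with getpass and pass it through the MYSQL_PWD environment variable (the user, database, and file names below are just placeholders):

import getpass
import os
import subprocess

password = getpass.getpass("MySQL password: ")
# MYSQL_PWD keeps the password off the command line and out of the process list.
env = dict(os.environ, MYSQL_PWD=password)

with open("wp_users.sql", "rb") as dump:
    subprocess.run(["mysql", "-u", "root", "wp_users"], stdin=dump, env=env, check=True)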
Directly from /var/www/html:
mysql -u username -p database_name < /path/to/file.sql
From within mysql:
mysql> use db_name;
mysql> source backup-file.sql
Open Terminal Then
mysql -u root -p
eg:- mysql -u shabeer -p
After That Create a Database
mysql> create database "Name";
eg:- create database INVESTOR;
Then Select That New Database "INVESTOR"
mysql> USE INVESTOR;
Select the path of sql file from machine
mysql> source /home/shabeer/Desktop/new_file.sql;
Then press enter and wait for some times if it's all executed then
mysql> exit
From Terminal:
mysql -uroot -p --default-character-set=utf8 database_name </database_path/database.sql
In the terminal, type:
mysql -uroot -p1234
then at the mysql prompt:
use databasename;
source /path/filename.sql
The command below works on Ubuntu 16.04; I am not sure whether it works on other Linux platforms.
Export SQL file:
$ mysqldump -u [user_name] -p [database_name] > [database_name.sql]
Example : mysqldump -u root -p max_development > max_development.sql
Import SQL file:
$ mysql -u [user_name] -p [database_name] < [file_name.sql]
Example: mysql -u root -p max_production < max_development.sql
Note: the SQL file should be in the current directory (otherwise give its full path).
I usually use this command to load my SQL data when divided in files with names : 000-tableA.sql, 001-tableB.sql, 002-tableC.sql.
for anyvar in *.sql; do <path to your bin>/mysql -u<username> -p<password> <database name> < $anyvar; done
Works well on OSX shell.
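A rough Python equivalent of that shell loop, in case you prefer to script it (the credentials and database name are placeholders):

import glob
import subprocess

# Import every *.sql file in the current directory in name order
# (000-tableA.sql, 001-tableB.sql, ...).
for sql_file in sorted(glob.glob("*.sql")):
    with open(sql_file, "rb") as dump:
        subprocess.run(
            ["mysql", "-u", "username", "-ppassword", "dbname"],
            stdin=dump,
            check=True,
        )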
Explanation:
First create a database or use an existing one; in my case, I am using an existing database.
Load the database by giving the database name (ClassicModels in my case) and, using the < operator, the path to the dump file (sakila-data.sql).
By running show tables, you get the list of tables in the database.
Note: In my case I got error 1062, because I was trying to load the same data again.
mysql -u username -ppassword dbname < /path/file-name.sql
example
mysql -u root -proot product < /home/myPC/Downloads/tbl_product.sql
Use this from terminal
After struggling for some time, I found the information at https://tommcfarlin.com/importing-a-large-database/
Connect to Mysql (let's use root for both username and password):
mysql -uroot -proot
Connect to the database (let's say it is called emptyDatabase); you should get a confirmation message:
connect emptyDatabase
Import the source file; let's say the file is called mySource.sql and it is in a folder called mySourceDB under the profile of a user called myUser:
source /Users/myUser/mySourceDB/mySource.sql
Open the MySQL Command Line Client and type in your password
Change to the database you want to use for importing the .sql file data into. Do this by typing:
USE your_database_name
Now locate the .sql file you want to execute.
If the file is located in the main local C: drive directory and the .sql script file name is currentSqlTable.sql, you would type the following:
\. C:\currentSqlTable.sql
and press Enter to execute the SQL script file.
If you are using sakila-db from mysql website,
It's very easy on Linux; just follow the steps below. After downloading the sakila-db zip file, extract it. You will now have two files: sakila-schema.sql and sakila-data.sql.
Open terminal
Enter command mysql -u root -p < sakila-schema.sql
Enter command mysql -u root -p < sakila-data.sql
Now run mysql -u root -p and enter your password; you are now in the mysql shell with the default database selected.
To use sakila database, use this command use sakila;
To see tables in sakila-db, use show tables command
Make sure the extracted files are in your home directory (or run the commands from the directory that contains them).
First connect to mysql via command line
mysql -u root -p
Enter MySQL PW
Select target DB name
use <db_name>
Select your db file for import
SET autocommit=0; source /root/<db_file>;
commit;
This should do it.
Even a 10 GB DB can be imported successfully this way. :)
In Ubuntu, from the MySQL monitor, you can use this syntax:
mysql> use <dbname>
-> The USE statement tells MySQL to use dbname as the default database for subsequent statements
mysql> source <file-path>
for example:
mysql> use phonebook;
mysql> source /tmp/phonebook.sql;
Important: make sure the .sql file is in a directory that mysql can access, such as /tmp.
If you want to import a database from an SQL dump which might have "use" statements in it, I recommend using the "-o" option as a safeguard against accidentally importing into the wrong database.
• --one-database, -o
Ignore statements except those that occur while the default
database is the one named on the command line. This filtering is
limited, and based only on USE statements. This is useful for
skipping updates to other databases in the binary log.
Full command:
mysql -u <username> -p -o <databasename> < <filename.sql>
For Ubuntu/Linux users:
Extract the SQL file and paste it somewhere,
e.g. you pasted it on the Desktop.
Open the terminal,
log in to MySQL and create a database:
Create database db_name;
Exit MySQL from your terminal,
cd Desktop
mysql -u root -p db_name < /path/to/mysql.sql
Enter the password: ....
Before running the commands on the terminal you have to make sure that you have MySQL installed on your terminal.
You can use the following command to install it:
sudo apt-get update
sudo apt-get install mysql-server
Reference here.
After that you can use the following commands to import a database:
mysql -u <username> -p <databasename> < <filename.sql>
The simplest way to import a database into your MySQL from the terminal is the process below:
mysql -u root -proot database_name < path to your .sql file
What I'm doing above is:
Logging in to mysql with my username and password (here both are root; note there is no space between -p and the password)
After the credentials I give the name of the database where I want to import my .sql file. Please make sure the database already exists in your MySQL
The database name is followed by < and then the path to your .sql file. For example, if my file is stored on the Desktop, the path will be /home/Desktop/db.sql
That's it. Once you've done all this, press Enter and wait for your .sql file to be loaded into the respective database
There has to be no space between -p and the password:
mysql -u [dbusername] -p[dbpassword] [databasename] < /home/serverusername/public_html/restore_db/database_file.sql
I always use this and it works perfectly. Thanks for asking this question. Have a great day. Njoy :)

How to export the existing documents stored in a database on the MongoDB Atlas free cloud version?

I have stored around 500k documents in a collection in a database on the free cluster available through the free cloud version of MongoDB, called MongoDB Atlas. I have a total of 512 MB of storage available, so I need to delete this data, but first I need to export it into a CSV or Excel file.
I have just installed the MongoDB shell and am using the PyMongo driver to connect to the database. I have not installed MongoDB locally on my machine.
I tried using mongoexport from the command line, but it did not work, stating that the command is not recognized. I am using Python.
Assuming that you already have the MongoDB command-line tools installed on your system:
mongoexport -h <hostname:port> -d <db name> -c <collection> -u <user> -p <password> -o <output file>
This should export your collection as JSON.
If you want a binary export for a future restore on different infrastructure, you should use:
mongodump -h <hostname:port> -d <db name> -c <collection> -u <user> -p <password> -o <output directory>
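Since the question mentions PyMongo and mongoexport was not on the PATH, a small PyMongo sketch can also write the collection to CSV directly; the connection string, database, and collection names below are placeholders for your own Atlas cluster:

import csv

from pymongo import MongoClient

# Placeholder Atlas connection string; copy the real one from the Atlas UI.
client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/")
collection = client["mydatabase"]["mycollection"]

cursor = collection.find()
first = next(cursor, None)
if first is not None:
    with open("export.csv", "w", newline="") as f:
        # Use the first document's keys as columns; ignore extra keys in later docs.
        writer = csv.DictWriter(f, fieldnames=list(first.keys()), extrasaction="ignore")
        writer.writeheader()
        writer.writerow(first)
        for doc in cursor:
            writer.writerow(doc)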

PostgreSQL file permissions error using COPY

I am using python to dump csv data into a database using Psycopg2. I need to give Postgres permission to a specific filepath in order to use the COPY command (documentation: https://www.postgresql.org/docs/10/static/sql-copy.html). I need to give permission to a specific directory path route and file to avoid the following error:
COPY database.table_name FROM '/home/development_user/Documents/App/CSV/filename.csv' delimiter ',' csv header
ERROR: could not open file "/home/development_user/Documents/App/CSV/filename.csv" for reading: Permission denied
To simplify things, I want to add postgres to the development user's group. That way, postgres should have the group read permissions that the development user can easily set on a path-by-path basis. I added the postgres user to the development_user group using the following command and verified that it was successful:
$ sudo usermod -a -G development_user postgres
$ groups postgres
postgres : postgres development_user
Here is the output of a permissions path trace using the namei -l [path] command:
$ namei -l /home/development_user/Documents/App/CSV/filename.csv
drwxr-xr-x root root /
drwxr-xr-x root root home
drwxr-x--- development_user development_user development_user
drwxr-xr-x development_user development_user Documents
drwxr-xr-x development_user development_user App
drwxrwxr-x development_user development_user CSV
-rw-rw-r-- development_user development_user filename.csv
As you can see, anyone in the development_user group should now have read (r) and execute (x) permissions on all directories in the path, plus read and write permissions on the final file. If postgres tried to access the same file as "other", it would be blocked at the development_user directory.
However, when I try to access the file I get the permissions error noted above. When I open up the development_user directory with other read and execute permissions, as in the command below, Postgres is able to read the file:
$ chmod o+rx /home/development_user
However, I do not want to grant other read and execute permissions on the development_user home directory, and I can't see why the postgres user is not able to use the group permissions outlined above to access the same file, since I added postgres to the development_user group.
Any ideas whether my method of giving postgres permission to read a file by adding it to the user's group is a viable strategy? I do not want to use other solutions, such as those mentioned here (PostgreSQL - inconsistent COPY permissions errors) or here (Postgres ERROR: could not open file for reading: Permission denied), which advise opening up permissions by setting the file owner to postgres:postgres, or opening up the directory permissions too widely, such as allowing all users to read and execute on the development home directory. I also do not want to create another directory under the system directories and be forced to save files there, as suggested here (psql ERROR: could not open file "address.csv" for reading: No such file or directory).
From the PostgreSQL Manual:
COPY naming a file or command is only allowed to database superusers,
since it allows reading or writing any file that the server has
privileges to access.
So the PostgreSQL user doing the copying must be a database superuser.
You can do this with the ALTER ROLE command:
ALTER ROLE <rolename> WITH SUPERUSER
Also:
COPY with a file name instructs the PostgreSQL server to directly read
from or write to a file. The file must be accessible by the PostgreSQL
user (the user ID the server runs as) and the name must be specified
from the viewpoint of the server.
...
Files named in a COPY command are read or written directly by the
server, not by the client application. Therefore, they must reside on
or be accessible to the database server machine, not the client.
The default system user that PostgreSQL runs as is postgres. Ensure that user has access to the files you want to copy. You can test this by running sudo -i -u postgres to become the postgres user and then trying to view the files.
The way I solved this problem was to use the psycopg2 cursor class function copy_expert (docs: http://initd.org/psycopg/docs/cursor.html). copy_expert lets you feed the data via STDIN, thereby bypassing the need to grant superuser privileges to the postgres user.
From Postgres COPY Docs (https://www.postgresql.org/docs/current/static/sql-copy.html):
Do not confuse COPY with the psql instruction \copy. \copy invokes
COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in
a file accessible to the psql client. Thus, file accessibility and
access rights depend on the client rather than the server when \copy
is used.
You can also leave the permissions set strictly for access to the development_user home folder and the App folder.
sql = "COPY table_name FROM STDIN DELIMITER '|' CSV HEADER"
self._cursor.copy_expert(sql, open(csv_file_name, "r"))
Slight variation on jonnyjandles' answer, since that shows a mystery self._cursor; a more typical invocation might be:
copy_command = "COPY table_name FROM STDIN CSV HEADER;"
with connection.cursor() as cursor:
    cursor.copy_expert(copy_command, open(some_file_path, "r"))
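For completeness, a self-contained version of the copy_expert approach might look like the sketch below; the connection parameters, table name, and CSV path are placeholders for your own environment:

import psycopg2

# Placeholder connection details.
conn = psycopg2.connect(dbname="mydb", user="development_user",
                        password="secret", host="localhost")

copy_command = "COPY table_name FROM STDIN WITH (FORMAT csv, HEADER true)"
with conn:  # commits on success, rolls back on error
    with conn.cursor() as cursor:
        with open("/home/development_user/Documents/App/CSV/filename.csv") as f:
            cursor.copy_expert(copy_command, f)
conn.close()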

Can't connect to MongoDB on AWS EC2 using Python

I have installed MongoDB 3.0 using this tutorial:
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It installed fine. I have also given the 'ec2-user' permissions to all the data and log folders, i.e. /var/lib/mongo and /var/log/mongodb, and have set up the conf file as well.
Now the thing is that the mongodb server always fails to start with the command
sudo service mongod start
It just says failed, nothing else.
But if I run the command
mongod --dbpath /var/lib/mongo
it starts the mongodb server correctly (though I have specified the same dbpath in the .conf file as well).
What is it I am doing wrong here?
When you run sudo mongod it does not load a config file at all, it literally starts with the compiled in defaults - port 27017, database path of /data/db etc. - that is why you got the error about not being able to find that folder. The "Ubuntu default" is only used when you point it at the config file (if you start using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, there will be now: you have run the process, with your normal config (pointing at your usual dbpath and log), as the root user. That means there will now be a number of files in that normal MongoDB data folder owned by root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files, and most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
If you’re starting mongod as a service using:
sudo service mongod start
Make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
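Once mongod starts cleanly as a service, a quick PyMongo check from the same instance (assuming the default host and port and no authentication) confirms the server is reachable:

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

client = MongoClient("mongodb://localhost:27017/", serverSelectionTimeoutMS=5000)
try:
    # "ping" forces a round trip to the server.
    print(client.admin.command("ping"))
except ServerSelectionTimeoutError as exc:
    print("mongod is not reachable:", exc)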

Django backup strategy with dumpdata and migrations

As in this question, I set up a dumpdata-based backup system for my database. The setup is akin to running a cron script that calls dumpdata and moves the backup to a remote server, with the aim of simply using loaddata to recover the database. However, I'm not sure this plays well with migrations. loaddata now has an ignorenonexistent switch to deal with deleted models/fields, but it is not able to resolve cases where columns were added with one-off defaults or apply RunPython code.
The way I see it, there are two sub-problems to address:
Tag each dumpdata output file with the current version of each app
Splice the fixtures into the migration path
I'm stumped about how to tackle the first problem without introducing a ton of overhead. Would it be enough to save an extra file per backup that contained an {app_name: migration_number} mapping?
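One possible sketch of that idea (an assumption on my part, not an established pattern): use Django's MigrationRecorder to write the latest applied migration per app next to each dumpdata file. This has to run inside a configured Django context, e.g. a management command or manage.py shell; the output file name is arbitrary.

import json

from django.db.migrations.recorder import MigrationRecorder

# Build an {app_name: latest_applied_migration} mapping from the
# django_migrations table and save it alongside the fixture.
latest = {}
for migration in MigrationRecorder.Migration.objects.order_by("applied"):
    latest[migration.app] = migration.name  # later rows overwrite earlier ones

with open("backup_migration_state.json", "w") as f:
    json.dump(latest, f, indent=2)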
The second problem I think is easier once the first one is solved, since the process is roughly:
Create a new database
Run migrations forward to the appropriate point for each app
Call loaddata with the given fixture file
Run the rest of the migrations
There's some code in this question (linked from the bug report) that I think could be adapted for this purpose.
Since these are fairly regular/large snapshots of the database, I don't want to keep them as data migrations cluttering up the migrations directory.
I am taking the following steps to back up, restore, or transfer my PostgreSQL database between any instances of my project:
The idea is to keep as few migrations as possible, as if manage.py makemigrations had been run for the first time on an empty database.
Let's assume that we have a working database to our development environment. This database is a current copy of the production database that should not be open to any changes. We have added models, altered attributes etc and those actions have generated additional migrations.
Now the database is ready to be migrated to production, which (as stated before) is not open to the public, so it is not altered in any way. In order to achieve this:
I perform the normal procedure in the development environment.
I copy the project to the production environment.
I perform the normal procedure in the production environment
We make the changes in our development environment. No changes should happen in the production database because they will be overridden.
Normal Procedure
Before anything else, I have a backup of the project directory (which includes a requirements.txt file), a backup of the database and -of course- git is a friend of mine.
I take a dumpdata backup in case I need it. However, dumpdata has some serious limitations regarding content types, permissions, or other cases where a natural foreign key should be used:
./manage.py dumpdata --exclude auth.permission --exclude contenttypes --exclude admin.LogEntry --exclude sessions --indent 2 > db.json
I take a pg_dump backup to use:
pg_dump -U $user -Fc $database --exclude-table=django_migrations > path/to/backup-dir/db.dump
Only if I want to merge the existing migrations into one do I delete all migrations from every application.
In my case the migrations folder is a symlink, so I use the following script:
#!/bin/bash
for dir in $(find -L -name "migrations")
do
rm -Rf $dir/*
done
I delete and recreate the database:
For example, a bash script can include the following commands:
su -l postgres -c "PGPASSWORD=$password psql -c 'drop database $database ;'"
su -l postgres -c "createdb --owner $username $database"
su -l postgres -c "PGPASSWORD=$password psql $database -U $username -c 'CREATE EXTENSION $extension ;'"
I restore the database from the dump:
pg_restore -Fc -U $username -d $database path/to/backup-dir/db.dump
If migrations were deleted in step 3, I recreate them in the following way:
./manage.py makemigrations <app1> <app2> ... <appn>
... by using the following script:
#!/bin/bash
apps=()
for app in $(find ./ -maxdepth 1 -type d ! -path "./<project-folder>" ! -path "./.*" ! -path "./")
do
apps+=(${app#??})
done
all_apps=$(printf "%s " "${apps[@]}")
./manage.py makemigrations $all_apps
I migrate using a fake migration:
./manage.py migrate --fake
In case something has gone completely wrong and everything is ***, (this can happen, indeed), I can use the backup to revert everything to its previous working state. If I would like to use the db.json file from step one, it goes like this:
When pg_dump or pg_restore fails
I perform the steps:
3 (delete migrations)
4 (delete and recreate the database)
6 (makemigrations)
and then:
Apply the migrations:
./manage.py migrate
Load the data from db.json:
./manage.py loaddata path/to/db.json
Then I try to find out why my previous effort was not successful.
When the steps are performed successfully, I copy the project to the server and perform the same steps on that box.
This way, I always keep the smallest number of migrations, and I am able to use pg_dump and pg_restore on any box that shares the same project.
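If you want to automate the dumpdata and pg_dump backup steps of this procedure from Python, a small wrapper around the same two commands could look like the sketch below; the user, database, and backup directory names are placeholders.

import os
import subprocess
from datetime import date

# Placeholder names; adjust the user, database, and backup directory to your setup.
os.makedirs("backups", exist_ok=True)
stamp = date.today().isoformat()

# JSON fixture via dumpdata, with the same exclusions as above.
with open(f"backups/db-{stamp}.json", "w") as fixture:
    subprocess.run(
        ["python", "manage.py", "dumpdata",
         "--exclude", "auth.permission",
         "--exclude", "contenttypes",
         "--exclude", "admin.LogEntry",
         "--exclude", "sessions",
         "--indent", "2"],
        stdout=fixture, check=True,
    )

# Binary dump via pg_dump (the password can be supplied via ~/.pgpass or PGPASSWORD).
subprocess.run(
    ["pg_dump", "-U", "myuser", "-Fc", "mydatabase",
     "--exclude-table=django_migrations",
     "-f", f"backups/db-{stamp}.dump"],
    check=True,
)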
