Cannot restore postgres dump on heroku - python

I am using this command to create a dump locally:
PGPASSWORD=admin pg_dump -h 127.0.0.1 -p 5432 -U postgres --no-owner --no-acl -f database.dump
and my dump is created successfully.
Then I uploaded this dump to Dropbox and made it public with this link: http://www.dropbox.com/s/rsqeiuqzusejz5x/database.dump?dl=1 (notice I have changed https to http and dl=0 to dl=1; dl=1 makes it downloadable).
Then in my terminal I ran this command: heroku pg:backups:restore "http://www.dropbox.com/s/38prh2y7pdwhqqj/database.dump?dl=1" --confirm tranquil-anchorage-39635
but I am getting this error:
! An error occurred and the backup did not finish.
!
! pg_restore: error: did not find magic string in file header
! waiting for restore to complete
! pg_restore finished with errors
! waiting for download to complete
! download finished successfully
!
! Run heroku pg:backups:info r010 for more details.
I have tried the official documentation and various answers, but nothing seems to work.

On doing further research, I found out that pg_restore expects the dump file to be in a specific format, which must be specified when creating the dump file. That is why there was the error pg_restore: error: did not find magic string in file header.
pg_dump -h <localhost> -p <port> -U <username> --format=c <database_name> > daman.dump
After running this command you will be prompted to enter the password for the user.
Notice --format=c in the above command. This creates a dump file in a format that pg_restore can restore. I should also mention that a dump file created this way is not readable in a text editor like Notepad or VS Code, unlike the plain-text dump you get when --format=c is not used.
For further details, see the documentation here.
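As a quick sanity check before uploading, you can verify the dump really is in custom format: the "magic string" pg_restore looks for is the PGDMP signature at the start of a custom-format archive. A minimal Python sketch (the file name is the example from above):

# Check the header of a pg_dump file. Custom-format dumps made with
# --format=c begin with the bytes b"PGDMP"; a plain SQL dump starts
# with readable SQL text instead.
with open("daman.dump", "rb") as f:
    header = f.read(5)
if header == b"PGDMP":
    print("Custom-format dump; pg_restore should accept it.")
else:
    print("Not a custom-format dump; recreate it with pg_dump --format=c.")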

I found that even with the correct file format, only certain file hosts would work. I tried file.io and filetransfer.io, but neither seemed to work with Heroku; of those three, only Amazon S3 worked correctly.
Even if you have the correct file format, you'll get this error if you try to host the file in certain places.

Try running pg_dump with the -Fc option (custom format), as documented in https://devcenter.heroku.com/articles/heroku-postgres-import-export:
pg_dump -Fc --no-acl --no-owner -h localhost -U myuser mydb > mydb.dump


How can I import a database with mysql from terminal?

I cannot find the exact syntax.
Assuming you're on a Linux or Windows console:
Prompt for password:
mysql -u <username> -p <databasename> < <filename.sql>
Enter password directly (not secure):
mysql -u <username> -p<PlainPassword> <databasename> < <filename.sql>
Example:
mysql -u root -p wp_users < wp_users.sql
mysql -u root -pPassword123 wp_users < wp_users.sql
See also:
4.5.1.5. Executing SQL Statements from a Text File
Note: if you are on Windows, you will have to cd (change directory) to your MySQL/bin directory in CMD before executing the command.
Preferable way for windows:
Open the console and start the interactive MySQL mode
use <name_of_your_database>;
source <path_of_your_.sql>
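If you need to run the same import from a script rather than typing it at a console, here is a minimal Python sketch of the prompt-for-password form above (file, user, and database names are the example ones):

import subprocess

# Feed the dump file to the mysql client on stdin; -p with no value
# makes the client prompt for the password interactively.
with open("wp_users.sql", "rb") as dump:
    subprocess.run(["mysql", "-u", "root", "-p", "wp_users"],
                   stdin=dump, check=True)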
mysql -u <USERNAME> -p <DB NAME> < <dump file path>
-u - for Username
-p - to prompt the Password
Eg. mysql -u root -p mydb < /home/db_backup.sql
You can also provide the password immediately after -p, but for security reasons it is not advisable: the password will appear in the command itself rather than being masked.
Directly from var/www/html
mysql -u username -p database_name < /path/to/file.sql
From within mysql:
mysql> use db_name;
mysql> source backup-file.sql
Open a terminal, then:
mysql -u root -p
e.g. mysql -u shabeer -p
After that, create a database:
mysql> create database <name>;
e.g. create database INVESTOR;
Then select that new database "INVESTOR":
mysql> USE INVESTOR;
Select the path of the SQL file from your machine:
mysql> source /home/shabeer/Desktop/new_file.sql;
Then press Enter and wait; once it has all executed:
mysql> exit
From Terminal:
mysql -uroot -p --default-character-set=utf8 database_name </database_path/database.sql
In the terminal, type
mysql -uroot -p1234
then, inside the mysql client:
use databasename; source /path/filename.sql
The commands below work on Ubuntu 16.04; I am not sure whether they work on other Linux platforms.
Export SQL file:
$ mysqldump -u [user_name] -p [database_name] > [database_name.sql]
Example : mysqldump -u root -p max_development > max_development.sql
Import SQL file:
$ mysql -u [user_name] -p [database_name] < [file_name.sql]
Example: mysql -u root -p max_production < max_development.sql
(Note that importing uses the mysql client, not mysqldump.) The SQL file should be in the same directory.
I usually use this command to load my SQL data when it is divided into files with names like 000-tableA.sql, 001-tableB.sql, 002-tableC.sql:
for anyvar in *.sql; do <path to your bin>/mysql -u<username> -p<password> <database name> < "$anyvar"; done
Works well on an OS X shell.
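A Python sketch of the same loop, in case you are driving it from a script (credentials and database name are placeholders):

import glob
import subprocess

# Apply each .sql file in name order, so 000-tableA.sql runs before
# 001-tableB.sql, and so on.
for path in sorted(glob.glob("*.sql")):
    with open(path, "rb") as dump:
        subprocess.run(["mysql", "-uusername", "-ppassword", "database_name"],
                       stdin=dump, check=True)
    print("loaded", path)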
Explanation:
First create a database or use an existing database. In my case, I am using an existing database.
Load the database by giving <name of database> (ClassicModels in my case) and, using the < operator, the path to the dump (sakila-data.sql).
By running show tables, I get the list of tables.
Note: in my case I got error 1062, because I was trying to load the same thing again.
mysql -u username -ppassword dbname < /path/file-name.sql
Example:
mysql -u root -proot product < /home/myPC/Downloads/tbl_product.sql
Use this from the terminal.
After struggling for some time, I found the information in https://tommcfarlin.com/importing-a-large-database/
Connect to MySQL (let's use root for both the username and password):
mysql -uroot -proot
Connect to the database (let's say it is called emptyDatabase; you should get a confirmation message):
connect emptyDatabase
Import the source; let's say the file is called mySource.sql and it is in a folder called mySourceDB under the profile of a user called myUser:
source /Users/myUser/mySourceDB/mySource.sql
Open the MySQL Command Line Client and type in your password
Change to the database you want to import the .sql file data into by typing:
USE your_database_name
Now locate the .sql file you want to execute.
If the file is located in the main local C: drive directory and the .sql script file name is currentSqlTable.sql, you would type the following:
\. C:\currentSqlTable.sql
and press Enter to execute the SQL script file.
If you are using the sakila-db from the MySQL website, it's very easy on the Linux platform. Just follow the steps below. After downloading the zip file of sakila-db, extract it. Now you will have two files: one is sakila-schema.sql and the other is sakila-data.sql.
Open a terminal.
Enter the command mysql -u root -p < sakila-schema.sql
Enter the command mysql -u root -p < sakila-data.sql
Now enter the command mysql -u root -p and enter your password; you have now entered the mysql shell with the default database.
To use the sakila database, use this command: use sakila;
To see the tables in sakila-db, use the show tables command.
Please make sure that the extracted files are present in your home directory.
First connect to mysql via the command line:
mysql -u root -p
Enter your MySQL password.
Select the target DB name:
use <db_name>
Select your db file for import:
SET autocommit=0; source /root/<db_file>;
commit;
This should do it. Even a 10 GB DB can be imported successfully this way. :)
In Ubuntu, from MySQL monitor, you have already used this syntax:
mysql> use <dbname>
-> The USE statement tells MySQL to use dbname as the default database for subsequent statements
mysql> source <file-path>
for example:
mysql> use phonebook;
mysql> source /tmp/phonebook.sql;
Important: make sure the sql file is in a directory that mysql can access, like /tmp
If you want to import a database from an SQL dump which might have "use" statements in it, I recommend using the "-o" option as a safeguard against accidentally importing into the wrong database.
• --one-database, -o
Ignore statements except those that occur while the default
database is the one named on the command line. This filtering is
limited, and based only on USE statements. This is useful for
skipping updates to other databases in the binary log.
Full command:
mysql -u <username> -p -o <databasename> < <filename.sql>
For Ubuntu/Linux users:
Extract the SQL file and paste it somewhere, e.g. on your Desktop.
Open the terminal.
Log in to MySQL and create a database:
create database db_name;
Exit MySQL from your terminal.
cd Desktop
mysql -u root -p db_name < /path/to/mysql.sql
Enter the password: ...
Before running the commands in the terminal, you have to make sure that you have MySQL installed.
You can use the following commands to install it:
sudo apt-get update
sudo apt-get install mysql-server
Reference here.
After that you can use the following commands to import a database:
mysql -u <username> -p <databasename> < <filename.sql>
The simplest way to import a database into your MySQL from the terminal is with the process below:
mysql -u root -proot database_name < /path/to/your/file.sql
What I'm doing above is:
Entering mysql with my username and password (here both are root; note there is no space between -p and the password)
Giving the name of the database where I want to import my .sql file. Please make sure the database already exists in your MySQL.
The database name is followed by < and then the path to your .sql file. For example, if my file is stored on the Desktop, the path will be /home/Desktop/db.sql
That's it. Once you've done all this, press Enter and wait for your .sql file to be loaded into the respective database.
There has to be no space between -p and the password:
mysql -u [dbusername] -p[dbpassword] [databasename] < /home/serverusername/public_html/restore_db/database_file.sql
I always use it; it works perfectly. Thanks for asking this question. Have a great day.

How to check the status of docker-compose up -d command

When we run the docker-compose up -d command to start containers using a docker-compose.yml file, it starts building images or pulling them from the registry, and we can see each step of this command in the terminal.
I am trying to run this command from a Python script. The command starts successfully, but after that I have no idea how far the process has gotten. Is there any way I can monitor the status of the docker-compose up -d command, so that the script can let the user (who is using the script) know how much of the process has completed, or whether the docker-compose command has failed for some reason?
Thanks
CODE:
from pexpect import pxssh
session = pxssh.pxssh()
if not session.login(ip_address, <USERNAME>, <PASSWORD>):
    print("SSH session failed on login")
    print(str(session))
else:
    print("SSH session login successful")
    session.sendline("sudo docker-compose up -d")
    session.prompt()
    resp = session.before
    print(resp)
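One way to get a programmatic result out of this setup is to ask the remote shell for the exit status of the command after it finishes. A sketch along those lines, reusing the pxssh approach from the question (host and credentials are placeholders):

from pexpect import pxssh

session = pxssh.pxssh(encoding="utf-8")
session.login("192.0.2.10", "user", "password")

session.sendline("sudo docker-compose up -d")
session.prompt()
print(session.before)        # whatever output docker-compose produced

session.sendline("echo $?")  # exit status of the previous command
session.prompt()
status = session.before.splitlines()[1].strip()
print("docker-compose succeeded" if status == "0"
      else "docker-compose failed with status " + status)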
You can view docker compose logs in the following ways:
Use docker compose up -d to start all services in detached mode (-d)
(you won't see any logs in detached mode).
Use docker compose logs -f -t to attach yourself to the logs of all
running services, where -f means you follow the log output and the
-t option gives you nice timestamps (Docs).
EDIT: Docker Compose is now available as part of the core Docker CLI. docker-compose is still supported for now but most documentation I have seen now refers to docker compose as standard. See https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command for more.
I think you should use the command docker-compose top and check the result; it should not be empty when the container is running.
If the containers are stopped, exited, or only created, it returns empty output.
What I do to debug small issues is to run:
docker-compose up {service_name}
This way I get to see the output for an individual service. If the service has a dependency you can always start multiple services like so:
docker-compose up {service_name1} {service_name2}
Additionally I use:
docker-compose logs -f -t {service_name1}
To see the logs of an already running service or alternatively:
docker logs -t -f {container_name}
Notice that the command above needs the container name and not the service name.
This way you can make sure, service by service, that everything works as expected, and then you can launch them all in detached mode as suggested in the other answers.
If you need a programmatic way with bash, this is the fastest implementation:
sleep 2 seconds;
check whether the container was up several seconds ago, which means you've just successfully deployed it.
docker ps will look like:
a6f088b1567e lc_fe_isr-app "docker-entrypoint.s…" 2 seconds ago Up 2 seconds 0.0.0.0:10001->3000/tcp lc_fe_isr-app-1
#!/bin/bash
#
# Check if a single container was started successfully
#
CONTAINER_NAME="lc_fe_isr-app-1"

sleep 2
docker ps | grep $CONTAINER_NAME

UP_SECONDS_AGO=`docker ps | grep $CONTAINER_NAME | grep ' seconds'`
echo $UP_SECONDS_AGO

if [ -n "$UP_SECONDS_AGO" ]
then
    echo "Deployed successfully"
else
    echo "Deploy FAILED"
    exit 1
fi
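A Python sketch of the same check, asking docker inspect for the container state instead of grepping docker ps (the container name is the example from the script above):

import subprocess
import time

CONTAINER_NAME = "lc_fe_isr-app-1"  # example name; adjust to your service

time.sleep(2)  # give the container a moment to start

# Query the container's state directly.
result = subprocess.run(
    ["docker", "inspect", "-f", "{{.State.Status}}", CONTAINER_NAME],
    capture_output=True, text=True)

if result.returncode == 0 and result.stdout.strip() == "running":
    print("Deployed successfully")
else:
    print("Deploy FAILED")
    raise SystemExit(1)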

Ansible with Github: Permission denied (Publickey)

I'm trying to understand the GitHub SSH configuration with Ansible (I'm working through the Ansible: Up & Running book). I'm running into two issues.
Permission denied (publickey) -
When I first ran the ansible-playbook mezzanine.yml playbook, I got a permission denied:
failed: [web] => {"cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "rc": 128}
stderr: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
msg: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
FATAL: all hosts have already failed -- aborting
Ok, fair enough, I see several people have had this problem. So I jumped to appendix A on running Git with SSH, and it said to run the ssh-agent and add the id_rsa key:
eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa
Output: Identity added. I ran ssh-add -l to check and got the long string: 2048 e3:fb:... But I still got the same error. So I checked the GitHub docs on SSH key generation and troubleshooting, which recommended updating the ssh config file on my host machine:
Host github.com
User git
Port 22
Hostname github.com
IdentityFile ~/.ssh/id_rsa
TCPKeepAlive yes
IdentitiesOnly yes
But this still produces the same error. So at this point, I started thinking it was my rsa file, which leads me to my second problem.
Key Generation Issues - I tried to generate an additional key to use, because the GitHub test threw another "Permission denied (publickey)" error.
Warning: Permanently added the RSA host key for IP address '192.30.252.131' to the list of known hosts.
Permission denied (publickey).
I followed the GitHub instructions from scratch and generated a new key with a different name:
ssh-keygen -t rsa -b 4096 -C "me@example.com"
I didn't enter a passphrase and saved it to the .ssh folder with the name git_rsa.pub. I ran the same test and got the following:
$ ssh -i ~/.ssh/git_rsa.pub -T git@github.com
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0644 for '/Users/antonioalaniz1/.ssh/git_rsa.pub' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: ~/.ssh/github_rsa.pub
Permission denied (publickey).
I checked the permissions and did a chmod 700 on the file, and I still get Permission denied (publickey). I even attempted to enter the key into my GitHub account, but first got a message that the key file needs to start with ssh-rsa. So I started researching and hacking: I started by just entering the long string in the file (it started with --BEGIN PRIVATE KEY--, but I omitted that part after it failed); however, GitHub is not accepting it, saying it's invalid.
This is my Ansible command in the YAML file:
- name: check out the repository on the host
git: repo={{ repo_url }} dest={{ proj_path }} accept_hostkey=yes
vars:
repo_url: git@github.com:lorin/mezzanine-example.git
This is my ansible.cfg file with ForwardAgent configured:
[defaults]
hostfile = hosts
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key
host_key_checking = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes
The box is an Ubuntu Trusty64 VM running on Mac OS. If anyone could clue me in on the file permissions and/or GitHub key generation, I would appreciate it.
I suspect the key permissions issue is because you are passing the public key instead of the private key as the argument to "ssh -i". Try this instead:
ssh -i ~/.ssh/git_rsa -T git@github.com
(Note that it's git_rsa and not git_rsa.pub).
If that works, then make sure it's in your ssh-agent. To add:
ssh-add ~/.ssh/git_rsa
To verify:
ssh-add -l
Then check that Ansible respects agent forwarding by doing:
ansible web -a "ssh-add -l"
Finally, check that you can reach GitHub via ssh by doing:
ansible web -a "ssh -T git#github.com"
You should see something like:
web | FAILED | rc=1 >>
Hi lorin! You've successfully authenticated, but GitHub does not provide shell access.
I had the same problem. It took me some time, but I found the solution.
The problem is that the URL is incorrect.
Just try to change it to:
repo_url: git://github.com/lorin/mezzanine-example.git
I ran into this issue and diagnosed it by turning up the verbosity on the ansible commands (very useful for debugging).
Unfortunately, ssh often throws error messages that don't quite lead you in the right direction (permission denied is very generic... though to be fair, it is often thrown when there is a file permission issue, so perhaps not quite so generic). Anyway, running the ansible test command with verbose on helps recreate the issue as well as verify when it is solved:
ansible -vvv all -a "ssh -T git@github.com"
Again, the setup I use (and a typical one) is to load your ssh key into the agent on the control machine and enable forwarding.
The steps are found in GitHub's helpful ssh docs.
It also stood out to me that when I ssh'd to the box itself via the vagrant command and ran the test, it succeeded. So I narrowed it down to how ansible was forwarding the connection. For me, what eventually worked was setting:
[paramiko_connection]
record_host_keys = False
in addition to the other config that controls host key verification:
host_key_checking = False
which essentially adds
-o StrictHostKeyChecking=no
to the ssh args for you, and
-o UserKnownHostsFile=/dev/null
was added to the ssh args as well
found here:
Ansible issue 9442
Again, this was on vagrant VMs, more careful consideration around host key verification should be taken on actual servers.
Hope this helps

PostgreSQL on MacOSX

I just wanted to install a PostgreSQL database. After 3 hours of trying, I do not know what else to do. My last try was installing PostgreSQL via Homebrew -> works perfectly fine.
But typing this:
which psql
I got this: /usr/local/bin/psql
From my view this path looks wrong, as I saw a different one in most tutorials. But I have no idea what to do.
But I went on trying:
createuser -U postgres yrkIO -P
And the terminal asked me for a password only to give me this:
createuser: could not connect to database postgres: FATAL: role "postgres" does not exist
What can I do, I just want to run a PostgreSQL on my Python Flask App?
Have you tried it without forcing a password?
createuser -s -r postgres
This worked for me.
Also, remember to start the server.
Start:
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
Stop server:
pg_ctl -D /usr/local/var/postgres stop -s -m fast
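Once the server is running and the role exists, a quick connectivity check from Python for the Flask app (a sketch assuming psycopg2 is installed; the database name and password are placeholders):

import psycopg2  # pip install psycopg2-binary

# Placeholder settings; use the role and database you created above.
conn = psycopg2.connect(host="127.0.0.1", port=5432, dbname="postgres",
                        user="yrkIO", password="your_password")
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()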

How can I use Python to deploy proxies from the command line

Deployment using Python throws an error:
I used Python code (Apigee's deploy.py) to deploy our company proxy to the Apigee platform. I read http://apigee.com/docs/api-services/content/deploying-proxies-command-line,
but it throws an error when I run "python api-platform-samples-master/tools/deploy.py -n apikey -u "yusuf.karatoprak@mobgen.com:Welcome#2014" -o yusufkaratoprak123 -e test -p / -d sample-proxies".
I would like to solve this situation. The code I added to the Python script is not working; it throws Error: name 'ZipFile' is not defined.
The -d flag value needs to point to the directory that contains the /apiproxy directory for the sample you want to deploy. (In your command above, it appears that you are pointing at /sample-proxies rather than, for example, /sample-proxies/apikey.)
Try using the deploy scripts. There is one in each sample proxy directory. There's also a script, /setup/deploy_all.sh, if you want to deploy all the sample proxies.
Make sure you update /setup/setenv.sh before running the deploy scripts.
The error is in how you are calling it from the command line: you have a space in one of the parameters you pass in, which needs to be put inside quotes. Turn -u yusuf karatoprak:123 into -u "yusuf karatoprak:123".
Fixed command-line call:
python api-platform-samples-master/tools/deploy.py -n weatherapi -u "yusuf karatoprak:123" -o yk123 -e test -p / -d simpleProxy
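As for the NameError itself: name 'ZipFile' is not defined almost always means the script references ZipFile without importing it from the standard library. A hypothetical sketch of the missing import, zipping an apiproxy directory the way the deploy tooling expects (paths are illustrative):

import os
from zipfile import ZipFile  # the import the NameError points to

# Zip the apiproxy directory before uploading it.
with ZipFile("apiproxy.zip", "w") as zf:
    for root, _, files in os.walk("apiproxy"):
        for name in files:
            zf.write(os.path.join(root, name))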
