Restoring database with .sql.gz file in odoo9 shows error - python

I stored a backup of my database as a .sql.gz file, but I cannot restore it. I tried the "restore database" option in the Odoo 9 UI and it gave me the error below. I even tried restoring a plain dump.sql file, but got the same error.
Error:
Database restore error: Postgres subprocess
('/usr/bin/pg_restore', u'--dbname=Backedup',
'--no-owner', '/tmp/tmpay5e1D') error 1

Sometimes a restore is more easily achieved using the psql command. pg_restore can only read the archive formats produced by pg_dump (custom, directory, or tar); a plain SQL dump, which is what .sql and .sql.gz files usually contain, has to be replayed as a script of SQL commands instead. If the contents of your sql file seem to be ok, try loading it that way. See the documentation at https://www.postgresql.org/docs/9.1/static/backup-dump.html#BACKUP-DUMP-RESTORE for an example.
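A minimal sketch of that approach from Python, assuming the target database already exists (e.g. created with createdb Backedup, matching the name in the error above) and the file really contains plain SQL; the path is a placeholder:

import gzip
import subprocess

# Decompress the dump and feed it to psql, which replays plain SQL scripts;
# pg_restore only understands pg_dump's archive formats.
with gzip.open("/path/to/backup.sql.gz", "rb") as f:
    sql = f.read()

subprocess.run(
    ["psql", "--dbname=Backedup", "--no-psqlrc"],
    input=sql,
    check=True,
)

The shell equivalent is a one-liner: gunzip -c backup.sql.gz | psql Backedup.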

Related

How do I set up Alembic for a SQLite database attached as a schema?

I've tried many contortions on this problem to try to figure out what's going on.
My SQLAlchemy code specified tables as schema.table. I have a special connection object that connects using the specified connect string if the database is PostgreSQL or Oracle, but if the database is SQLite, it connects to a :memory: database, then attaches the SQLite file-based database using the schema name. This allows me to use schema names throughout my SQLAlchemy code without a problem.
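For context, here is a minimal sketch of that attach-as-schema setup (module, file, and schema names are illustrative, not the asker's actual code):

from sqlalchemy import create_engine, event

engine = create_engine("sqlite://")  # plain in-memory SQLite database

@event.listens_for(engine, "connect")
def attach_schema(dbapi_conn, connection_record):
    # Attach the file-based database under the schema name the models use,
    # so Table(..., schema="myschema") resolves against the attached file.
    dbapi_conn.execute("ATTACH DATABASE 'myapp.db' AS myschema")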
But when I try to set up Alembic to see my database, it fails completely. What am I doing wrong?
I ran into several issues that had to be worked through before I got this working.
Initially, Alembic didn't see my database at all. If I specified it in the alembic.ini file, Alembic would load the SQLite database under the default schema, but my model code specified a schema, so that didn't work. I had to change run_migrations_online() in alembic/env.py to call the connection method from my own code instead of using engine_from_config. In my case, I had created a database object with a connect() method that returns the engine and the metadata, which I call as connectable, meta = db.connect(), and I retrieve the schema name with schema = db.schema(). I had to import the db class from my SQLAlchemy code to get access to these.
Now I was getting a migration that would build the entire database from scratch, but I couldn't run it because my database already had those changes, so apparently Alembic wasn't seeing my database. Alembic also kept telling me that my database was out of date. The problem there was that Alembic's version table, alembic_version, was being written to my :memory: database, and as soon as the connection was dropped, so was the table. To get Alembic to remember the migration, that table needed to be created in my file-based database, so I added more code to env.py to pass the schema to context.configure using version_table_schema=my_schema.
When I went to generate the migration again, I still got the migration that would build the database from scratch, so Alembic STILL wasn't seeing my database. After lots more Googling, I found that I needed to pass include_schemas=True to context.configure in env.py. But after I added that, I started getting tracebacks from Alembic.
Fortunately, my configuration was set up to provide both the connection and the metadata. By changing the target_metadata=target_metadata line to target_metadata=meta (my local metadata returned from the connection), I got around these tracebacks as well, and Alembic started to behave properly.
So to recap, to get Alembic working with a SQLite database attached as a schema name, I had to import the connection script I use for my Flask code. That connection script properly attaches the SQLite database, then reflects the metadata. It returns both the engine and the metadata. I assign the engine to the "connectable" variable in env.py, the metadata to a new local variable meta, and the schema name to a local variable schema.
In the with connectable.connect() as connection: block, I then pass to context.configure additional arguments target_metadata=meta, version_table_schema=schema, and include_schemas=True where meta and schema are my new local variables set above.
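Putting those pieces together, a hedged sketch of what run_migrations_online() in env.py might look like (the db module and its connect()/schema() helpers are the ones described above, and the import path is a placeholder):

from alembic import context
from myapp import db  # the connection script used by the Flask code

def run_migrations_online():
    connectable, meta = db.connect()  # engine plus reflected metadata
    schema = db.schema()              # schema the SQLite file is attached as

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=meta,         # compare against the reflected metadata
            version_table_schema=schema,  # keep alembic_version out of :memory:
            include_schemas=True,         # make Alembic look beyond the default schema
        )
        with context.begin_transaction():
            context.run_migrations()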
With all of these changes, I thought I had Alembic working with SQLite databases attached as schemas. Unfortunately, I continued to run into problems, and eventually decided that I simply wouldn't use Alembic with SQLite. Our rule now is that Alembic migrations are only for non-SQLite databases, and SQLite data has to be migrated to another database before attempting an Alembic migration of the data.
I'm documenting this so that anyone else facing this may be able to follow what I've done and possibly get Alembic working for SQLite.

How do I enable local infile server side?

Firstly, I am new to Stack Overflow, so I'd appreciate any suggestions on how to improve my question, in addition to any potential solutions.
I am using MySQL and Python (with the PyCharm IDE) and I need to load data from a CSV file into a table in a database I already created using MySQL Workbench (I am not using the Workbench import tool because it's too slow, and I will need to do this from PyCharm in the future anyway).
I can connect to my database successfully from PyCharm and view the tables etc. I am then using the following to try to load the data from my CSV file:
mycursor = db.cursor()
query = "LOAD DATA LOCAL INFILE 'file path' INTO TABLE scores FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (ID, Date, Score)"
mycursor.execute(query)
I received an error when I did this, and saw from another question on Stack Overflow that I needed to enable the 'local_infile' setting, so I did that (I can see it's turned on).
However, I am still getting the error below when I try to import my data, and from reading on Stack Overflow I believe this is because I also need to enable 'local_infile' on the server side, but I am unsure how to do this.
The error I am now getting when I try to load my data is as follows:
mysql.connector.errors.ProgrammingError:
3948 (42000): Loading local data is disabled; this must be enabled on both the client and server sides
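For what it's worth, here is a hedged sketch of enabling both sides with mysql-connector-python (connection details are placeholders, and SET GLOBAL requires a suitably privileged account; the server setting can also be made persistent with local_infile=1 under [mysqld] in my.cnf):

import mysql.connector

db = mysql.connector.connect(
    host="localhost",
    user="root",
    password="...",
    database="mydb",
    allow_local_infile=True,  # client-side opt-in
)
cur = db.cursor()
cur.execute("SET GLOBAL local_infile = 1")  # server-side switch
cur.execute(
    "LOAD DATA LOCAL INFILE '/path/to/scores.csv' INTO TABLE scores "
    "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' "
    "(ID, Date, Score)"
)
db.commit()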

SQLite cursor creates a new .db with the same name as the one I want it to open

I'm trying to use a cursor to open a database and read a table using Python; however, I keep getting this error:
sqlite3.OperationalError: no such table: table_name
I can also see that a new .db file is created with the same name as the one I want to open, except that it's empty.
This has worked before on a different database, although my code did not close the connection to that database when I ran it the first time.
Could it be that the connection is still open, and that is why I am unable to run the code on a different database? If so, how can I make sure that connection is closed?
If this is not the case, would it be possible for someone to shed some light on this?
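One likely cause, for the record: sqlite3.connect() silently creates a new, empty database file when the given path doesn't exist, and a relative path is resolved against the current working directory, which matches the empty .db file being created. A minimal sketch of a connection that fails loudly instead (the file name is a placeholder):

import sqlite3
from pathlib import Path

# Build an absolute path so the lookup doesn't depend on the working directory.
db_path = Path(__file__).parent / "mydata.db"

# mode=rw opens an existing database read-write but refuses to create one,
# raising sqlite3.OperationalError if the file is missing.
con = sqlite3.connect(f"file:{db_path}?mode=rw", uri=True)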

Opening sqlite3 database with no write access

I'm trying to perform queries on an SQLite database from Python. The problem is that I do not have write access to the database file, nor to the directory containing it.
When I connect to the database and perform a SELECT query I get an "unable to open database file" error. I tried following the advice in this answer, but it didn't work. I guess SQLite fails when trying to create the lock files.
When I have write access to the directory, but not to the sqlite file, I get a different error: a locking error. This is because sqlite creates the shm and wal files with the same permissions as the db file, so I end up with shm and wal files I can't write to, which again results in a locking error.
Other than copying all files to a directory I do have full access to, is there another way around this?
The documentation says:
It is not possible to open read-only WAL databases. The opening process must have write privileges for "-shm" wal-index shared memory file associated with the database, if that file exists, or else write access on the directory containing the database file if the "-shm" file does not exist.
To allow read-only access to that database, some user with write permissions needs to change it to some other journal mode.
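A minimal sketch of that one-time fix, run by a user who does have write access (the file name is a placeholder):

import sqlite3

con = sqlite3.connect("path/to/database.sqlite")
con.execute("PRAGMA journal_mode=DELETE")  # leave WAL for the rollback journal
con.close()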
You can open a sqlite3 connection in read-only mode with the following syntax:
con = sqlite3.connect('file:path/to/database.sqlite?mode=ro', uri=True)

sqlite3 insert using python and python cgi

In db.py, I have a function (insert) that inserts data into SQLite correctly.
Now I want to insert data into SQLite through Python FastCGI. In the FastCGI script (named post.py) I can get the request data correctly, but when I call db.insert, it gives me an internal server error.
I already did chmod 777 sqlite.db. Does anyone know what the problem is?
Finally I found the answer:
The sqlite3 library also needs write permission on the directory that contains the database file, because it needs to create a lock file there.
That is why inserting data directly (as my own user) works fine, but doing it through the web (CGI, FastCGI, etc.), where the server runs as a different user, produces an error.
Just add write permission to the directory.
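A minimal sketch of that fix from Python (the path and permission bits are illustrative; a chown/chmod from the shell that grants the web server's user write access works just as well):

import os
import stat

db_dir = "/path/to/app"  # the directory that contains sqlite.db
mode = os.stat(db_dir).st_mode
os.chmod(db_dir, mode | stat.S_IWUSR | stat.S_IWGRP)  # add write for owner and group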
