disk I/O error when using Sqlite3 and SqlAlchemy in docker - python

I have a Flask app that creates a SQLite db to load fixtures for tests. When I run pytest on OS X, there are no issues. However, when I set 'PRAGMA journal_mode=WAL' within an Ubuntu 14.04 Docker container, I get this:
disk I/O error
Traceback (most recent call last):
File "/tmp/my_app/util/sqlalchemy_helpers.py", line 23, in pragma_journalmode_wal
cursor.execute('PRAGMA journal_mode=WAL')
OperationalError: disk I/O error
The SQLite db file is written to a folder within /tmp that is dynamically created using Python's tempfile.mkdtemp function. Even though the tests run as root (because Docker), I still made sure the folder has full read/write/execute permissions. I verified that there is plenty of space left on /tmp. I have test code that creates, modifies, and deletes a file in the database folder, and it passes successfully.
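A minimal sketch of how such a helper is typically wired up, assuming the pragma is registered on a SQLAlchemy connect event (the make_engine name and test.db filename are illustrative, not the original code):

import os
import tempfile
from sqlalchemy import create_engine, event

def make_engine():
    # the database file lives in a per-run temporary directory, as in the question
    db_dir = tempfile.mkdtemp()
    engine = create_engine("sqlite:///" + os.path.join(db_dir, "test.db"))

    @event.listens_for(engine, "connect")
    def pragma_journalmode_wal(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        # WAL mode creates test.db-wal and test.db-shm next to the database file;
        # a disk I/O error here usually means those sidecar files cannot be created
        cursor.execute("PRAGMA journal_mode=WAL")
        cursor.close()

    return engine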
I cannot seem to find a way to get an error code or better explanation as to what failed. Any ideas how I can better debug the issue? Could there be an issue with the docker container?

I had a similar problem just now, when recreating an sqlite3 database:
Removed database.sqlite3
Created database.sqlite3
Set up the right permissions.
The error occurred.
After some time I figured out that I also had database.sqlite3-shm and database.sqlite3-wal files left over.
Removed database.sqlite3-shm and database.sqlite3-wal
And everything went back to normal.
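A minimal sketch of that cleanup, using the filenames from this answer:

import os

# remove the stale WAL sidecar files along with the database itself before recreating it
for path in ("database.sqlite3", "database.sqlite3-wal", "database.sqlite3-shm"):
    if os.path.exists(path):
        os.remove(path)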

Related

Airflow task fails with segmentation fault

I'm trying to execute this jar file https://github.com/RMLio/rmlmapper-java from Airflow, but for some reason it is failing straight away. I'm using a PythonOperator to execute some python code, and inside it I have a subprocess call to the java command.
Test command is:
java -jar /root/airflow/dags/rmlmapper-6.0.0-r363-all.jar -v
I'm running Airflow inside a Docker container. The weird thing is that if I execute the exact same command inside the container it works fine.
I've tried a bit of everything, but the result is always the same: a segmentation fault (exit code 139).
The memory of the container seems to be fine, so it shouldn't be directly related to an OOM issue. I also tried adjusting the default memory in the Docker Compose file, with no success.
My suspicion is that the Java application tries to load some files that are stored inside the jar, but perhaps Airflow changes the 'user.dir' directory, so it cannot find them and fails.
I'm really out of ideas so any help will be highly appreciated. Thank you.
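For reference, a minimal sketch of the python_callable such a task might use, assuming the jar path from the question; pinning cwd and capturing output at least shows which signal killed the JVM:

import subprocess

def run_rmlmapper():
    # run the jar with an explicit working directory so relative file lookups
    # do not depend on whatever 'user.dir' Airflow happens to inherit
    result = subprocess.run(
        ["java", "-jar", "/root/airflow/dags/rmlmapper-6.0.0-r363-all.jar", "-v"],
        cwd="/root/airflow/dags",
        capture_output=True,
        text=True,
    )
    # a return code of -11 means the JVM died from SIGSEGV,
    # which the shell reports as 139
    print(result.returncode, result.stdout, result.stderr)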

AWS EB "None of the instances are sending data."

I had a problem when trying to deploy my Django app with EB. I got something like this:
Instance has not sent any data since launch
each time I change options etc. The AWS interface refers me to my eb-engine.log file, in which there is one error line:
[ERROR] An error occurred during execution of command [app-deploy] - [StageApplication]. Stop running the command. Error: staging application failed due to invalid zip file
Moreover, I consistently see 'No data' in my environment's 'Health' section.
I've uploaded a file with the .zip extension, as instructed. I checked my configuration file beforehand, and I also tried changing the instance type (to have more memory), but nothing worked. I'm pretty new to AWS and don't really have a clue how to deal with this.
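One common cause of the "invalid zip file" staging error is a bundle whose contents are nested inside a top-level folder instead of sitting at the archive root. A minimal sketch of building the bundle from the project directory itself (project_dir and app.zip are illustrative names):

import os
import zipfile

project_dir = "my-django-project"  # illustrative path to the project root
with zipfile.ZipFile("app.zip", "w", zipfile.ZIP_DEFLATED) as bundle:
    for root, _, files in os.walk(project_dir):
        for name in files:
            full = os.path.join(root, name)
            # arcname strips the leading folder so manage.py etc. sit at the archive root
            bundle.write(full, arcname=os.path.relpath(full, project_dir))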

pycharm adds garbage to remote path, causing file not found error when using remote interpreter

Hello and happy new year. I was able to connect to a remote host with PyCharm, and I can see and open files from a Linux machine. Additionally, I was able to set up a remote interpreter, which I can use on local files. The problem is that when I use that interpreter with remote files, I keep getting a 'file does not exist' error.
When I bring up the run configuration, the script path appears to contain garbage, and I don't know where it comes from. I see the same garbage in the script path when I run the script.
Anyone have any ideas what could be causing the issue?
I'm having the exact same problem. I guess one has to have the files in a local copy. What you call 'garbage' is, I think, a kind of hash for a temporary copy of the file, which you have locally anyway. So just use local files, and update the remote location automatically.
UPDATE
I made it work by selecting the right configuration next to the run button in PyCharm. The IDE was generating additional configurations, and those produced the garbage paths you describe. Once I selected my manually created original configuration, everything worked fine.
Change your working directory in Configurations

SQLAlchemy + SQLite Locking in IPython Notebook

I'm getting an OperationalError: (OperationalError) database is locked error when connecting via SQLAlchemy in an IPython notebook instance, and I'm not sure why.
I've written a Python interface to a SQLite database using SQLAlchemy and the Declarative Base syntax. I import the database models into an IPython notebook to explore the data. This worked just fine this morning. Here is the code:
from psf_database_interface import session, PSFTable
query = session.query(PSFTable).first()
But this afternoon after I closed my laptop with IPython running (it restarts the server just fine) I started getting this error. It's strange because I can still open the database from the SQLite3 command line tool and query data. I don't expect any other processes to be connecting to this database and running fuser on the database confirms this. My application is not using any concurrent processes (in the code I've written, IDK if something is buried in SQLAlchemy or IPython), and even if it were I'm just doing a read operation, which SQLite does support concurrently.
I've tried restarting the IPython kernel as well as killing and restarting the IPython notebook server. I've tried creating a backup of the database and replacing the database with the backup as suggested here: https://stackoverflow.com/a/2741015/1216837. Lastly, out of desperation, I tried adding the following to see if I could clean out something stuck in the session somehow:
print session.is_active
session.flush()
session.close()
session.close_all()
print session.is_active
Which returns True and True. Any ideas?
Update: I can run the code snippet that causes the error from a Python file without any issues; the problem only occurs in IPython.
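As a general mitigation for "database is locked" (independent of the IPython-specific cause described in the answer below), the sqlite3 driver accepts a busy timeout that can be passed through SQLAlchemy; the filename here is illustrative:

from sqlalchemy import create_engine

# the sqlite3 driver's timeout (in seconds) makes a connection wait for a lock
# instead of raising "database is locked" immediately
engine = create_engine(
    "sqlite:///psf_database.sqlite",  # illustrative filename
    connect_args={"timeout": 30},
)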
I faced the same problem: I can run Python scripts, but IPython raises the exception below.
You need to check with fuser that no process is using the file. If you cannot find anything and your command history is not important to you, you can use the following workaround.
When I deleted the /home/my_user/.ipython/profile_default/history.sqlite file, I could start IPython again. As mentioned, the history is then empty.
$ ipython
[TerminalIPythonApp] ERROR | Failed to create history session in /home/my_user/.ipython/profile_default/history.sqlite. History will not be saved.
Traceback (most recent call last):
File "/home/esadrfa/libs/anaconda3/lib/python3.6/site-packages/IPython/core/history.py", line 543, in __init__
self.new_session()
File "<decorator-gen-22>", line 2, in new_session
File "/home/esadrfa/libs/anaconda3/lib/python3.6/site-packages/IPython/core/history.py", line 58, in needs_sqlite
return f(self, *a, **kw)
File "/home/esadrfa/libs/anaconda3/lib/python3.6/site-packages/IPython/core/history.py", line 570, in new_session
self.session_number = cur.lastrowid
sqlite3.OperationalError: database is locked
[TerminalIPythonApp] ERROR | Failed to open SQLite history :memory: (database is locked).
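A minimal sketch of the workaround described above, removing the locked history database together with any WAL sidecar files (the path follows the log above):

import glob
import os

# delete the locked history database and any -wal/-shm sidecar files;
# IPython recreates it on the next start, but the command history is lost
history = os.path.expanduser("~/.ipython/profile_default/history.sqlite")
for path in glob.glob(history + "*"):
    os.remove(path)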

Issues migrating sqlite3 database to different version?

I have been tasked with migrating a python web application to another Linux server. Frustratingly, the entire database is sqlite3. I have moved all related code and database files to the new server and set up the environment. Python does not seem to be able to open the database files as I get this message when running the app:
OperationalError: unable to open database file
I have checked the following:
All paths are correct, the database connection is made.
Read/Write permission is open to all users on the files for testing
One difference between the servers is that the old server has SQLite 3.5.6 and the new one has 3.6.20. Could there be file compatibility issues here? If so, is there a way to convert the database files to be compatible? Is there another problem I may be overlooking?
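To rule out a file-format problem, one option is to rebuild the database with the new server's sqlite3 library using the standard library's Connection.iterdump(); the file names here are illustrative:

import sqlite3

# dump the old database to SQL statements and replay them into a fresh file
# created by the new server's sqlite3 library
old = sqlite3.connect("database_from_old_server.sqlite3")
new = sqlite3.connect("database_rebuilt.sqlite3")
new.executescript("\n".join(old.iterdump()))
old.close()
new.close()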
The error message
OperationalError: unable to open database file
may occur if the directory containing the database file is not writable.
To fix this, either make the directory writable for everyone or change its owner to $USER:
chmod o+w /path/to/dir
chown $USER /path/to/dir
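A quick way to confirm this from Python before opening the connection (the path is illustrative); SQLite needs write access to the directory, not only to the database file, so it can create its journal and temporary files there:

import os

db_path = "/path/to/dir/database.sqlite3"  # illustrative
db_dir = os.path.dirname(db_path)
# write access to the containing directory is required for journal/temp files
print("directory writable:", os.access(db_dir, os.W_OK))
print("file writable:", os.access(db_path, os.W_OK))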
