I'm no longer able to change my database on ArangoDB.
If I try to create a collection I get the error:
Collection error: cannot create collection: invalid database directory
If I try to delete a collection I get the error:
Couldn't delete collection.
Besides that, some of the collections are now corrupted.
I've been working with this db for 2 months and I'm only getting these errors now.
Thanks.
In case anyone ever runs into the same error: it turned out to be just a temporary error due to server overload.
I'm trying to connect to my MongoDB and update a document.
We're using a replica server as a seed, and then we want to write to a collection (specifically, update a document).
No matter what I do, every time I try to update the given document, I get the following error: NotMasterError: not master, full error: {'ok': 0.0, 'errmsg': 'not master', 'code': 10107, 'codeName': 'NotMaster'}.
I've tried changing the read preference to Primary and changing the write concern to w: 1, but nothing seems to work.
When I debug, I can see that the client discovered all the machines in the network, including the actual master.
With a Mongo library in another language (ReactiveMongo in Scala), this is done automatically, but it seems that with PyMongo I'm struggling. How can I ensure that the update gets forwarded to a Primary node?
If anybody can help, that'd be great :)
Read preference applies to reads. It has no effect on writes. All writes must be sent to the primary.
You should be connecting to the replica set (also known as "discovering the topology") instead of using a direct connection, and then specifying a read preference for secondary reads.
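A minimal sketch of that setup with PyMongo (the hosts, database name and replica set name below are placeholders):

```python
from pymongo import MongoClient, ReadPreference

# Connect to the whole replica set (topology discovery) instead of a single node.
client = MongoClient(
    "mongodb://host1:27017,host2:27017/mydb",  # placeholder hosts
    replicaSet="my-replica-set-name",          # placeholder set name
)

# Read preference only affects reads; writes are always routed to the primary.
db = client.get_database("mydb", read_preference=ReadPreference.SECONDARY_PREFERRED)
db.my_collection.update_one({"_id": 1}, {"$set": {"status": "done"}})
```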
So thanks to D. SM's answer, I made sure that when I initialize the MongoClient I connect to the specific replica set by adding the keyword param:
client = MongoClient(uri, replicaset='my-replica-set-name').
To find out what the replica set name is (if you don't know it), you can look at your server status output and find the repl.setName key.
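If you prefer to read it from PyMongo instead of the mongo shell, something like this should work (a sketch, assuming a client that is already connected with sufficient privileges):

```python
# serverStatus exposes a "repl" section on replica set members.
repl_info = client.admin.command("serverStatus")["repl"]
print(repl_info["setName"])
```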
Thanks again :)
I'm running a Python db migration script (Flask-Migrate) and have added the alembic.ddl.impl DefaultImpl workaround (sketched after the traceback below) to get around the first set of errors, but now I'm getting the following. I'm trying to use this script to set up my tables and database in Snowflake. What am I missing? Everything else seems to be working, and I can't find any help on this particular error in the Snowflake documentation. I would have assumed that the Snowflake SQLAlchemy connector would handle the creation of a unique index.
The script so far does create several of the tables, but when it gets to this part it throws the error.
> sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 001003 (42000): SQL compilation error: syntax error line 1 at position 7 unexpected 'UNIQUE'.
> [SQL: CREATE UNIQUE INDEX ix_flicket_users_token ON flicket_users (token)]
> (Background on this error at: http://sqlalche.me/e/f405)
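For reference, the DefaultImpl workaround mentioned above looks roughly like this, added to migrations/env.py (the subclass name here is my own):

```python
from alembic.ddl.impl import DefaultImpl

class SnowflakeImpl(DefaultImpl):
    # Registers a migration impl for the "snowflake" dialect so Alembic
    # stops rejecting it as an unsupported dialect.
    __dialect__ = "snowflake"
```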
Snowflake does not have INDEX objects, so any CREATE ... INDEX statement will fail.
With Snowflake, you have to trust the database to organize your data with micro partitions and build a good access plan for your queries.
You will feel uneasy about this at first, but you will eventually stop worrying. Bleeding-edge workloads will still require monitoring and tuning performance with the query log, but there is nothing new there.
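In practice that means the generated migration has to skip the index DDL when it runs against Snowflake. A sketch of one way to do it (the dialect check is my own addition; table and column names come from the error above):

```python
from alembic import op

def upgrade():
    # Snowflake has no INDEX objects, so only emit the index DDL on other backends.
    if op.get_bind().dialect.name != "snowflake":
        op.create_index(
            "ix_flicket_users_token", "flicket_users", ["token"], unique=True
        )
```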
I'm not able to access data from the TraCI package, which apparently should have instances of all Domain classes. I tried creating those classes manually, but it caused some connection problems.
I was running the simulation when an error occurred:
'NoneType' object has no attribute '_sendReadOneStringCmd'
for this line of code:
result = self._connection._sendReadOneStringCmd
On investigating further I found that self._connection is being set to None, though initially it was set to a Connection-type object.
I think this is because I initialized TrafficLaneDomain(), LaneAreaDomain() and other similar classes in my code.
The attached documentation shows the instances present, but I'm unable to access them.
Is this the cause, or might something else be wrong?
TraCI DOC Home
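For what it's worth, the usual pattern is to let traci create and wire up those domain instances itself rather than constructing TrafficLaneDomain()/LaneAreaDomain() manually. A minimal sketch (the sumo binary and config file name are placeholders):

```python
import traci

# traci.start() launches SUMO and connects the module-level domain instances.
traci.start(["sumo", "-c", "my_network.sumocfg"])

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    lane_ids = traci.lane.getIDList()          # use traci.lane instead of a manual LaneDomain()
    tls_ids = traci.trafficlight.getIDList()   # use traci.trafficlight likewise

traci.close()
```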
I have stored the backup of my database as an sql.gz, but I cannot restore it. I tried restoring it with the "restore database" option in the Odoo 9 UI and it gave me this error. I even tried restoring a dump.sql file, but got the same error.
Error:
Database restore error: Postgres subprocess
('/usr/bin/pg_restore', u'--dbname=Backedup',
'--no-owner', '/tmp/tmpay5e1D') error 1
Sometimes a restore is more easily achieved using the psql command. If the contents of your sql file seem to be ok, you might try loading it as a simple file containing SQL commands. See documentation in https://www.postgresql.org/docs/9.1/static/backup-dump.html#BACKUP-DUMP-RESTORE for example.
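A sketch of that approach driven from Python (the file names are illustrative; the database name is taken from the error above):

```python
import gzip
import shutil
import subprocess

# Decompress the sql.gz backup first.
with gzip.open("backup.sql.gz", "rb") as src, open("backup.sql", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Plain SQL dumps are loaded with psql; pg_restore only understands
# custom- or tar-format archives, which is why it fails on a .sql file.
subprocess.run(["psql", "--dbname=Backedup", "--file=backup.sql"], check=True)
```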
I am using Pyramid, SQLAlchemy (with ZopeTransactionExtension), PostgreSQL and uWSGI, and I have a problem with my web app.
So when I save an object using DBSession.add(object) and flush with DBSession.flush(), I get no error and I even get the id of the newly created object, but when the page reloads I see the success message, yet the object does not appear in the list of all objects. (No error was thrown. In the PostgreSQL log I can see my INSERT query and even a COMMIT right after it.)
This can also be seen through the API. When I create the new object, it returns its id; 2 seconds later I delete this object and I get an error that an object with this id does not exist. The problem occurs randomly (sometimes more often and sometimes less often), only in the production/testing environment (not locally), and disappears after rebuilding (it comes back after 1-2 days of uptime).
Has anyone had a similar problem?
Check out this answer. Generally, everything that is flushed is only temporarily written to the database and is persisted when a .commit() happens.
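A minimal sketch of the flush/commit distinction under ZopeTransactionExtension (DBSession and Item are placeholder names from a typical Pyramid models module):

```python
import transaction
from myapp.models import DBSession, Item  # hypothetical imports

item = Item(name="example")
DBSession.add(item)
DBSession.flush()     # item.id is assigned, but the row is only visible inside this transaction
transaction.commit()  # the zope transaction manager issues the real database COMMIT
```

In a Pyramid view you would normally leave the commit to pyramid_tm at the end of the request rather than calling it yourself.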