Is there a way to configure Pyramid so that when MongoDB fails over to a secondary replica, Pyramid starts using it?
Pyramid should be using the official Python MongoDB driver (pymongo). The driver handles this "automatically", but it needs the correct connection string.
See here for the connection strings.
One thing to keep in mind: the definition of "automatic fail-over" is not clear cut.
If you create a new connection to the DB, that connection will point at the current primary.
If you use an existing connection from a pool, that connection may still be pointing at the old server. In that case it will throw an exception the first time you use it, and should connect to the correct server the second time.
However, when a fail-over happens there is a brief window (typically 2-10 seconds) during which there is no primary at all. Any operation attempted during that window will fail, because there is no primary to serve it.
Note that this is not specific to python, it's the way Replica Sets function.
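For illustration, here is a minimal sketch of such a connection string with pymongo. The host names, the replica set name rs0, and the database/collection names are placeholders, not anything from the question. The driver discovers the current primary from the seed list; an operation that hits the no-primary window raises AutoReconnect, which the caller can retry:

from pymongo import MongoClient
from pymongo.errors import AutoReconnect

# Seed list of all replica set members; the driver finds the primary itself.
client = MongoClient(
    "mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017"
    "/?replicaSet=rs0",
    serverSelectionTimeoutMS=10_000,
)

def find_user(user_id):
    # Retry once: the first attempt right after a fail-over may raise
    # AutoReconnect while the driver rediscovers the new primary.
    for attempt in range(2):
        try:
            return client.mydb.users.find_one({"_id": user_id})
        except AutoReconnect:
            if attempt == 1:
                raise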
I am fairly new to MongoDB, and I am wondering how I can establish multiple connections to a single Mongo instance without specifying ports or making a new config file for each user. I am running the Mongo instance in a Singularity container on a remote server.
Here is my sample config file:
# mongod.conf

# for documentation of all options, see:
#   https://docs.mongodb.com/manual/reference/configuration-options/

# where to write logging data for debugging and such.
systemLog:
  destination: file
  logAppend: true
  path: /path-to-log/

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
  maxIncomingConnections: 65536

# security
security:
  authorization: 'enabled'
Do I need to use a replica set? If so, can someone explain the concept behind a replica set?
Do I need to change my config file? If so, what changes do I need to make to allow for multiple connections?
Here is my code that I use to connect to the server (leaving out import statements for clarity):
PWD = "/path-to-singularity-container/"
os.chdir(PWD)
self.p = subprocess.Popen(f"singularity run --bind {PWD}/data:/data/db mongo.sif --auth --config {PWD}/mongod.conf", shell=True, preexec_fn=os.setpgrp)
connection_string = "mongodb://user:password@127.0.0.1:27017/"
client = pymongo.MongoClient(connection_string, serverSelectionTimeoutMS=60_000)
EDIT: I am trying to have multiple people connect to MongoDB using pymongo at the same time, given the same connection string. I am not sure how I can achieve this without giving each user a separate config file.
Thank you for your help!
You can set a high enough ulimit value; mongod tracks each incoming connection with a file descriptor and a thread.
The link below explains each ulimit parameter and the recommended values.
https://docs.mongodb.com/manual/reference/ulimit/
If you don't want downtime in your environment, you need a high-availability (HA) solution, which means a three-node replica set that can tolerate one node being down at a time.
If the primary node goes down, an internal election takes place and a new node is promoted to primary within seconds, so your application is only briefly impacted. Another benefit: if a node crashes, you still have another copy of the data.
Hope this will answer your question.
No special work is required, you simply create a client and execute queries.
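As a rough sketch of that (the credentials, database and collection names below are placeholders), several users or threads can each build their own MongoClient from the same connection string, and mongod will accept them all, up to maxIncomingConnections and the ulimit:

import threading
import pymongo

CONNECTION_STRING = "mongodb://user:password@127.0.0.1:27017/"

def worker(name):
    # Each user/thread can create its own client from the same string;
    # MongoClient also maintains an internal connection pool per process.
    client = pymongo.MongoClient(CONNECTION_STRING, serverSelectionTimeoutMS=60_000)
    doc_count = client.testdb.records.count_documents({})
    print(f"{name}: {doc_count} documents")

threads = [threading.Thread(target=worker, args=(f"user{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()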
I want some clarification on how exactly the pre-ping feature works with SQLAlchemy DB pools. Let's say I try to make a SQL query to my database through the pool. If the pool sends a pre-ping to check the connection and the connection is broken, does it handle this automatically? By handling I mean that it reconnects and then sends the SQL query. Or do I have to handle this myself in my code?
Thanks!
From the docs, yes stale connections are handled transparently:
The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.
... unless:
If the database is still not available when "pre ping" runs, then the initial connect will fail and the error for failure to connect will be propagated normally. In the uncommon situation that the database is available for connections, but is not able to respond to a "ping", the "pre_ping" will try up to three times before giving up, propagating the database error last received.
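For reference, enabling pre-ping is just an engine flag; a minimal sketch, assuming a MySQL URL that is only a placeholder:

from sqlalchemy import create_engine, text

# pool_pre_ping=True makes the pool issue a lightweight "ping" before handing
# out a pooled connection; stale connections are recycled transparently.
engine = create_engine(
    "mysql+pymysql://user:password@localhost/mydb",
    pool_pre_ping=True,
)

with engine.connect() as conn:
    result = conn.execute(text("SELECT 1"))
    print(result.scalar())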
I am using SolrClient for python with Solr 6.6.2. It works as expected but I cannot find anything in the documentation for closing the connection after opening it.
def getdocbyid(docidlist):
    for id in docidlist:
        solr = SolrClient('http://localhost:8983/solr', auth=("solradmin", "Admin098"))
        doc = solr.get('Collection_Test', doc_id=id)
        print(doc)
I do not know if the client closes it automatically or not. If it doesn't, wouldn't it be a problem if several connections are left open? I just want to know if there is any way to close the connection. Here is the link to the documentation:
https://solrclient.readthedocs.io/en/latest/
The connections are not kept around indefinitely. The standard timeout for a persistent HTTP connection in Jetty is five seconds, as far as I remember, so you do not have to worry about the number of kept-alive connections exploding.
The Jetty server will also just drop the connection if required, as it is not obliged to keep it around as a guarantee for the client. SolrClient uses a requests session internally, so subsequent queries reuse the same keep-alive connection. If you run into issues with this, you can keep a set of clients available as a pool in your application instead, and request an available client rather than creating a new one each time (see the sketch below).
I'm however pretty sure you won't run into any issues with the default settings.
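A minimal sketch of that pool idea, reusing the URL, credentials and collection name from the question (the pool size of 4 is arbitrary):

import queue
from SolrClient import SolrClient

# A tiny client pool, as suggested above: create a few clients up front and
# hand them out as needed instead of building a new one per request.
pool = queue.Queue()
for _ in range(4):
    pool.put(SolrClient('http://localhost:8983/solr', auth=("solradmin", "Admin098")))

def getdocbyid(docidlist):
    solr = pool.get()          # borrow a client
    try:
        for doc_id in docidlist:
            print(solr.get('Collection_Test', doc_id=doc_id))
    finally:
        pool.put(solr)         # return it for the next caller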
I'm having trouble with MySQL timing out and going away after 8 hours. I am using google app engine as a host. My Python script uses the Tornado framework.
Right now I instantiate my MySQL db connection before any functions right at the top of the main server script. Once I deploy that, the clock starts ticking and 8 hours or so later, MySQL will go away and I will have to deploy my script again.
I haven't been using db.close() at all because I hear that restarting the database connection takes a long time. Is this true? Or is there a proper way to use db.close()?
One of my friends suggested I try getting the database connection and then closing it after each function. Is that recommended, and where might I find some tutorials on that?
I'm mostly looking for resources here, but if someone wants to lay it out for me that would be awesome.
Thank you all in advance.
The connection is going away because of the wait_timeout session variable, which is "the number of seconds the server waits for activity on a noninteractive connection before closing it".
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
A good approach is to close the connection each time and create a new one when needed if you are not reusing the same connection very frequently; otherwise, you can increase the value of wait_timeout.
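A minimal sketch of the close-each-time pattern, assuming the PyMySQL driver (host, credentials and database name are placeholders); the same idea applies to any MySQL driver:

import pymysql

def run_query(sql, params=None):
    # Open a fresh connection per request; reconnecting is cheap compared
    # with holding one connection open past wait_timeout.
    conn = pymysql.connect(host="localhost", user="user",
                           password="password", database="mydb")
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params or ())
            return cur.fetchall()
    finally:
        conn.close()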
Establishing a connection to a MySQL database should be quite fast and it is certainly good practice to keep the connection open only for as long as you need it.
I am not certain why your connection should be non-responsive for 8 hours - have you tried checking your settings?
The correct command in Python is connection.close().
I want to do the following:
Have software running, written in Python 2.7
This software connects to a database (currently a MySQL database)
This software listens for connections on a TCP port X
When a connection is established and a client requests or commands something, the software uses the database to store, remove, or fetch information (based on the request or command).
What I currently have in mind is the classic approach: connect to the database, store the connection in an object (as a variable) that is passed to the threads spawned by the connection listener, and have those threads use that variable to do what they need with the database connection. (I know that multi-processing is better than multi-threading in Python, but it's not related to my question at this time.)
Now my question: how should I use SQLAlchemy in this context? I am quite confused, even though I have been reading quite a lot of documentation about it; there don't seem to be "good" examples of how to handle this kind of situation specifically, even though I have searched quite a lot.
What is the problem here? SQLAlchemy maintains a thread-safe connection pool... what else do you need?
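A minimal sketch of that approach, using Python 3's socketserver for the TCP plumbing (the module is named SocketServer in Python 2.7); the database URL, port, table and column names are placeholders. One engine is created at startup, and each handler thread checks a connection out of its pool per request:

import socketserver
from sqlalchemy import create_engine, text

# One engine (and its connection pool) for the whole process; it is safe to
# share across threads. Each handler checks a connection out per request.
engine = create_engine("mysql+pymysql://user:password@localhost/mydb",
                       pool_size=10, pool_recycle=3600)

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().decode().strip()
        with engine.connect() as conn:          # borrowed from the pool
            row = conn.execute(
                text("SELECT payload FROM items WHERE name = :n"),
                {"n": command},
            ).fetchone()
        self.wfile.write((str(row) + "\n").encode())

if __name__ == "__main__":
    # ThreadingTCPServer spawns a thread per client connection.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), Handler) as srv:
        srv.serve_forever()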