I'm trying to connect to my MongoDB and update a document.
We're using a replica set server as a seed, and then we want to write to a collection (specifically, update a document).
No matter what I do, every time I try to update the given document, I get the following error: NotMasterError: not master, full error: {'ok': 0.0, 'errmsg': 'not master', 'code': 10107, 'codeName': 'NotMaster'}.
I've tried changing the read preference to Primary and changing the write concern to w: 1, but nothing seems to work.
When I debug, I can see that the client discovered all the machines in the network, including the actual master.
With a Mongo library in another language (ReactiveMongo in Scala) this is handled automatically, but with PyMongo I'm struggling. How can I ensure that the update gets forwarded to a primary node?
If anybody can help, that'd be great :)
Read preference applies to reads. It has no effect on writes. All writes must be sent to the primary.
You should be connecting to the replica set (also known as "discovering the topology") instead of using a direct connection, and then specifying a read preference for secondary reads.
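For instance, a minimal PyMongo sketch (the host names and set name below are placeholders, not from the question):

from pymongo import MongoClient

# Listing one or more seed hosts together with replicaSet makes the driver discover
# the whole topology and route writes to whichever member is currently primary.
client = MongoClient(
    "mongodb://seed-host-1:27017,seed-host-2:27017",
    replicaSet="my-replica-set-name",
    readPreference="secondaryPreferred",  # affects reads only; writes always go to the primary
)
client["mydb"]["mycollection"].update_one({"_id": 1}, {"$set": {"status": "active"}})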
So, thanks to @D. SM's answer, I ensured that when I initialize the MongoClient I connect to the specific replica set by adding the keyword parameter:
client = MongoClient(uri, replicaset='my-replica-set-name')
To find out what the replica set name is (if you don't know it), you can look at your server status output and check the repl.setName key.
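For example, a quick sketch (the URI is a placeholder):

from pymongo import MongoClient

client = MongoClient("mongodb://seed-host:27017")
# serverStatus on a replica set member includes a "repl" section with the set name.
print(client.admin.command("serverStatus")["repl"]["setName"])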
Thanks again :)
I wrote a program that sends the following
to App: 8=FIX.4.4|9=156|35=V|34=2|49=id|52=sometime|56=id1|146=1|55=EURUSD|460=4|167=FOR|262=1|263=1|264=1|265=0|267=2|269=0|269=1|10=114|
I receive this. I get the bid and the offer as expected:
from App 8=FIX.4.4|9=217|35=W|34=4|49=id1|52=sometime|56=id|42=sometime1|55=EURUSD|262=1|268=2|269=0|270=1.12438|271=50000|269=1|270=1.12442|271=50000|10094=sometime2|10=002|
But as I request snapshot + updates on full refresh, it sends back the following:
to App: 8=FIX.4.4|9=118|35=j|34=3|49=id|52=sometime|56=id1|45=2|58=Conditionally Required Field Missing (299)|372=W|380=5|10=210|
My broker's DataDictionary configuration is the following:
UseDataDictionary=Y
ValidateUserDefinedFields=N # tried with Y, same
DataDictionary=C:\Users\Documents\FIX44.xml
Any idea what I did wrong, please?
Thank you folks!
Check your counterparty's documentation for what fields they expect you to send in the MarketDataRequest (35=V) message.
In the default DataDictionary, QuoteEntryID (tag 299) doesn't belong to MarketDataRequest or to any of the repeating groups it contains. This means that your counterparty has made a DD customization and added it somewhere.
So your main mistake is that you are not looking at your counterparty's docs, and your local DD is not in sync with theirs. That latter part is not burning you here in this question, but it will burn you later. Get your DD in sync!
Back to this issue: sure, you're adding QuoteEntryID to the message, but you're adding it at the top level of the message body, and your counterparty probably isn't looking for it there. If you look again at the default DataDictionary, QuoteEntryID always belongs to a group, so your counterparty probably wants it inside a group as well. You just need to read their docs to find out which group it is.
TLDR: Counterparties always customize the DataDictionary -- always read your counterparty's docs!
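For illustration only, here is a rough sketch using the QuickFIX Python bindings; the exact group that should carry QuoteEntryID (299) is broker-specific, and placing it in NoRelatedSym is purely an assumption here:

import quickfix as fix
import quickfix44 as fix44

mdr = fix44.MarketDataRequest(
    fix.MDReqID("1"),
    fix.SubscriptionRequestType('1'),  # '1' = snapshot + updates
    fix.MarketDepth(1),
)

entry_types = fix44.MarketDataRequest.NoMDEntryTypes()
entry_types.setField(fix.MDEntryType('0'))  # bid
mdr.addGroup(entry_types)
entry_types.setField(fix.MDEntryType('1'))  # offer
mdr.addGroup(entry_types)

related_sym = fix44.MarketDataRequest.NoRelatedSym()
related_sym.setField(fix.Symbol("EURUSD"))
related_sym.setField(fix.QuoteEntryID("1"))  # hypothetical placement -- confirm the correct group in your broker's docs
mdr.addGroup(related_sym)

Whatever the broker's docs say, mirror it in your local FIX44.xml so that validation and group parsing stay in sync.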
I don't have much knowledge of databases, but I wanted to know if there is any technique by which, when I update or insert a specific entry in a table, my Python application gets notified, so it can then see what was updated and refresh that particular row in the data stored in the session or some temporary storage.
I need to serve filter and sort calls again and again, so I don't want to fetch the whole dataset from SQL every time; I decided to keep it local and process it from there. But I was worried that, in the meantime, the DB might update and I could be passing the same old data to filter requests.
Any suggestions?
An RDBMS will only be updated by your own program's methods or functions, so you can simply print to the console or log from inside those.
If you want to track what was updated, modified, or deleted, you have to build another program that is able to track those changes (the logs) for the RDBMS.
Thanks.
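As a purely illustrative sketch of such a tracking approach (the table name and its updated_at column are assumptions, not something from the question):

import time
import sqlite3  # stand-in for whichever RDBMS driver you actually use

conn = sqlite3.connect("app.db")
last_seen = 0  # newest modification timestamp we've processed so far

while True:
    # Pick up only the rows that changed since the last poll.
    rows = conn.execute(
        "SELECT id, payload, updated_at FROM my_table WHERE updated_at > ?",
        (last_seen,),
    ).fetchall()
    for row_id, payload, updated_at in rows:
        # Refresh the copy held in the session / temporary storage here.
        last_seen = max(last_seen, updated_at)
    time.sleep(5)  # poll interval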
The project I'm working on is still using google-api-python-client, which is deprecated, and the official documentation has no examples for it. I've gotten BigQuery working with it, but I can't seem to figure out how to set configuration properties, specifically so that I can run a query with BATCH priority.
Can anyone point me in the right direction?
The answer is to use jobs().insert() rather than jobs().query(). Inserting a new job asynchronously gives the caller the ability to specify a wide range of options but requires them to run another command to get the results.
So assuming gs is your authenticated service object:
# insert an asynchronous job
jobr = gs.jobs().insert(projectId='abc-123', body={'configuration':{'query':{'query':'SELECT COUNT(*) FROM schema.table'}}}).execute()
# get query results of job
gs.jobs().getQueryResults(projectId='abc-123', jobId=jobr['jobReference']['jobId']).execute()
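Building on that, a hedged sketch for BATCH priority specifically (the project ID and query are placeholders): set configuration.query.priority and then poll the job until it is DONE, since batch jobs can sit in the queue for a while:

import time

job = gs.jobs().insert(
    projectId='abc-123',
    body={'configuration': {'query': {'query': 'SELECT COUNT(*) FROM schema.table',
                                      'priority': 'BATCH'}}},
).execute()
job_id = job['jobReference']['jobId']
# Batch queries are queued, so wait for the job to finish before asking for results.
while gs.jobs().get(projectId='abc-123', jobId=job_id).execute()['status']['state'] != 'DONE':
    time.sleep(5)
results = gs.jobs().getQueryResults(projectId='abc-123', jobId=job_id).execute()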
I have written microservices for auth, location, etc.
All of the microservices have different databases, and location, for example, is present in all of the databases for these services. When any of my projects needs a user's location, it first looks in the cache and, if it's not found, hits the database. So far so good. Now, when a location is changed in any of the databases, I need to update it in the other databases as well as update my cache.
Currently I made a model (called subscription) with a URL as its field; whenever a location is changed in any database, an object of this subscription is created. A periodic task checks the subscription model and, when it finds such objects, hits the APIs of the other services, updates the location, and updates the cache.
I am wondering if there is any better way to do this?
I am wondering if there is any better way to do this?
"better" is entirely subjective. if it meets your needs, it's fine.
something to consider, though: don't store the same information in more than one place.
if you need an address, look it up from the service that provides address, every time.
this may be a performance hit, but it eliminates the problem of replicating the data everywhere.
another option would be a more proactive approach, as suggested in comments.
instead of creating a task list for changes, and doing that periodically, send a message across rabbitmq immediately when the change happens. let every service that needs to know, get a copy of the message and update it's own cache of info.
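For example, a rough sketch with pika (the exchange name and payload shape are assumptions, not from the question):

import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="location_updates", exchange_type="fanout")

# Publisher side: emit an event as soon as a location changes.
channel.basic_publish(
    exchange="location_updates",
    routing_key="",
    body=json.dumps({"user_id": 42, "location": "new-location"}),
)

# Consumer side (in each interested service): bind a queue and refresh the local cache per message.
queue_name = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="location_updates", queue=queue_name)
channel.basic_consume(
    queue=queue_name,
    on_message_callback=lambda ch, method, props, body: print("refresh cache with", body),
    auto_ack=True,
)
# channel.start_consuming()  # blocks; run this in the consuming service's process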
Just remember, though: every time you have more than one copy of the information, you reduce the "correctness" of the system as a whole. It will always be possible for the information found in one of your apps to be out of date, because it did not get an update from the official source.
I am trying to interact with a DynamoDB table from python using boto. I want all reads/writes to be quorum consistency to ensure that reads sent out immediately after writes always reflect the correct data.
NOTE: my table is set up with "phone_number" as the hash key and first_name+last_name as a secondary index. And for the purposes of this question one (and only one) item exists in the db (first_name="Paranoid", last_name="Android", phone_number="42")
The following code works as expected:
customer = customers.get_item(phone_number="42")
While this statement:
customer = customers.get_item(phone_number="42", consistent_read=True)
fails with the following error:
boto.dynamodb2.exceptions.ValidationException: ValidationException: 400 Bad Request
{u'message': u'The provided key element does not match the schema', u'__type': u'com.amazon.coral.validate#ValidationException'}
Could this be the result of some hidden data corruption due to failed requests in the past? (for example two concurrent and different writes executed at eventual consistency)
Thanks in advance.
It looks like you are calling the get_item method so the issue is with how you are passing parameters.
get_item(hash_key, range_key=None, attributes_to_get=None, consistent_read=False, item_class=<class 'boto.dynamodb.item.Item'>)
Which would mean you should be calling the API like:
customer = customers.get_item(hash_key="42", consistent_read=True)
I'm not sure why the original call you were making was working.
To address your concerns about data corruption and eventual consistency: it is highly unlikely that any API call you could make to DynamoDB could get it into a bad state, short of sending it bad data for an item. DynamoDB is a highly tested solution that provides exceptional availability and goes to extraordinary lengths to take care of the data you send it.
Eventual consistency is something to be aware of with DynamoDB, but generally speaking it does not cause many issues, depending on the specifics of the use case. While AWS does not provide specific metrics on what "eventually consistent" looks like, in day-to-day use it is normal to be able to read back records that were just written or modified within a second, even with eventually consistent reads.
As for performing multiple writes simultaneously on the same object, DynamoDB writes are always strongly consistent. If you are worried about an individual item being modified concurrently and causing unexpected behavior, you can use conditional writes, which let a write fail so your application logic can deal with any conflicts that arise.
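For illustration, a conditional-write sketch; note that this uses boto3 (the newer AWS SDK) rather than the boto 2.x API from the question, and the table and attribute names simply mirror the example above:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("customers")
try:
    table.put_item(
        Item={"phone_number": "42", "first_name": "Paranoid", "last_name": "Android"},
        ConditionExpression="attribute_not_exists(phone_number)",  # only write if no item with this key exists yet
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        # Another writer got there first; resolve the conflict in application logic.
        pass
    else:
        raise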