QuickFIX Python data dictionary

I am connecting to an order session, but I receive execution reports without the ExecID field. I changed the requirement for the ExecID field to "N" in ExecutionReport messages in the data dictionary, but QuickFIX still sends a reject message. Thanks for any help.

It sounds like what you've changed in the data dictionary makes ExecID optional rather than mandatory. If you wanted to remove "the requirement" altogether, you'd have to remove ExecID from the fields making up an ExecutionReport in the data dictionary. However, if you did that and your counterparty still sent it in the execution report (because it's still configured in their data dictionary), then it would fail validation (provided you're using your own DD validation).
Why don't you want the ExecID field?
Why can't you ignore it if it's sent to you?
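For reference, the relevant entry in the data dictionary (the stock FIX44.xml that ships with QuickFIX) looks roughly like this; switching required="Y" to required="N" only makes the field optional, while deleting the <field> line removes it from the message definition entirely:
<message name="ExecutionReport" msgtype="8" msgcat="app">
    <field name="OrderID" required="Y"/>
    <!-- ...other ExecutionReport fields... -->
    <field name="ExecID" required="N"/> <!-- stock file has required="Y"; delete the line to drop the field -->
</message>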

FIX 4.4 MarketDataRequest Conditionally Required Field Missing (299)

I wrote a program that sends the following:
to App: 8=FIX.4.4|9=156|35=V|34=2|49=id|52=sometime|56=id1|146=1|55=EURUSD|460=4|167=FOR|262=1|263=1|264=1|265=0|267=2|269=0|269=1|10=114|
I receive this. I get the bid and the offer as expected:
from App 8=FIX.4.4|9=217|35=W|34=4|49=id1|52=sometime|56=id|42=sometime1|55=EURUSD|262=1|268=2|269=0|270=1.12438|271=50000|269=1|270=1.12442|271=50000|10094=sometime2|10=002|
But since I request snapshot plus updates on full refresh, it sends back the following:
to App: 8=FIX.4.4|9=118|35=j|34=3|49=id|52=sometime|56=id1|45=2|58=Conditionally Required Field Missing (299)|372=W|380=5|10=210|
The data dictionary of my broker is the following: DataDictionary
UseDataDictionary=Y
ValidateUserDefinedFields=N # tried with Y, same
DataDictionary=C:\Users\Documents\FIX44.xml
Any idea what I did wrong, please?
Thank you folks!
Check your counterparty's documentation for what fields they expect you to send in the MarketDataRequest (35=V) message.
In the default DataDictionary, QuoteEntryID (tag 299) doesn't belong to MarketDataRequest or to any of the repeating groups it contains. This means that your counterparty has made a DD customization and added it somewhere.
So your main mistake is that you are not looking at your counterparty's docs, and your local DD is not in sync with theirs. That latter part is not burning you here in this question, but it will burn you later. Get your DD in sync!
Back to this issue: sure, you're adding QuoteEntryID to the message, but you're adding it at the top level of the message body, and your counterparty probably isn't looking for it there. If you look again at the default DataDictionary, QuoteEntryID always belongs to a group, so your counterparty probably wants it in a group as well. You just need to read their docs to find out which group it is.
TLDR: Counterparties always customize the DataDictionary -- always read your counterparty's docs!
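If their docs say, for example, that QuoteEntryID goes inside one of the request's repeating groups, the QuickFIX/Python code would look roughly like this. Note that NoRelatedSym is only a guess for illustration; use whichever group the counterparty's spec actually names:
import quickfix as fix
import quickfix44 as fix44

msg = fix44.MarketDataRequest()
msg.setField(fix.MDReqID("1"))
msg.setField(fix.SubscriptionRequestType('1'))  # 263=1: snapshot + updates
msg.setField(fix.MarketDepth(1))                # 264=1

# Tag 299 goes inside a repeating group, not at the top level of the body.
sym = fix44.MarketDataRequest.NoRelatedSym()
sym.setField(fix.Symbol("EURUSD"))
sym.setField(fix.QuoteEntryID("1"))             # assumes the broker's DD adds 299 here
msg.addGroup(sym)

entry = fix44.MarketDataRequest.NoMDEntryTypes()
entry.setField(fix.MDEntryType('0'))            # 269=0: bid
msg.addGroup(entry)
entry.setField(fix.MDEntryType('1'))            # 269=1: offer
msg.addGroup(entry)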

DynamoDB consistent read results in schema error

I am trying to interact with a DynamoDB table from Python using boto. I want all reads and writes to use quorum consistency, to ensure that reads sent out immediately after writes always reflect the correct data.
NOTE: my table is set up with "phone_number" as the hash key and first_name+last_name as a secondary index. And for the purposes of this question one (and only one) item exists in the db (first_name="Paranoid", last_name="Android", phone_number="42")
The following code works as expected:
customer = customers.get_item(phone_number="42")
While this statement:
customer = customers.get_item(phone_number="42", consistent_read=True)
fails with the following error:
boto.dynamodb2.exceptions.ValidationException: ValidationException: 400 Bad Request
{u'message': u'The provided key element does not match the schema', u'__type': u'com.amazon.coral.validate#ValidationException'}
Could this be the result of some hidden data corruption due to failed requests in the past (for example, two concurrent and different writes executed with eventual consistency)?
Thanks in advance.
It looks like you are calling the get_item method, so the issue is with how you are passing parameters.
get_item(hash_key, range_key=None, attributes_to_get=None, consistent_read=False, item_class=<class 'boto.dynamodb.item.Item'>)
Which would mean you should be calling the API like:
customer = customers.get_item(hash_key="42", consistent_read=True)
I'm not sure why the original call you were making was working.
To address your concerns about data corruption and eventual consistency: it is highly unlikely that any API call you could make to DynamoDB could get it into a bad state, outside of you sending it bad data for an item. DynamoDB is a highly tested solution that provides exceptional availability and goes to extraordinary lengths to take care of the data you send it.
Eventual consistency is something to be aware of with DynamoDB, but generally speaking it does not cause many issues, depending on the specifics of the use case. While AWS does not provide specific metrics on what "eventually consistent" looks like, in day-to-day use it is normal to be able to read back records that were just written or modified within a second, even with eventually consistent reads.
As for performing multiple writes simultaneously on the same object, DynamoDB writes are always strongly consistent. If you are worried about an individual item being modified concurrently and causing unexpected behavior, you can use conditional writes: the conflicting write fails, and your application logic can deal with any issues that arise.
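A rough sketch of a conditional write with boto.dynamodb2, assuming a table named 'customers' with the same schema as in the question; save(overwrite=False) fails instead of silently replacing an existing item:
from boto.dynamodb2.table import Table
from boto.dynamodb2.items import Item
from boto.dynamodb2.exceptions import ConditionalCheckFailedException

customers = Table('customers')

item = Item(customers, data={
    'phone_number': '42',
    'first_name': 'Paranoid',
    'last_name': 'Android',
})

try:
    # overwrite=False makes this a conditional put: it raises instead of
    # clobbering an item that already has this hash key.
    item.save(overwrite=False)
except ConditionalCheckFailedException:
    # Another writer got there first; handle it in application logic.
    pass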

Using database as a key prefix in redis

I'm evaluating using redis to store some session values. When constructing the redis client (we will be using this python one) I get to pass in the db to use. Is it appropriate to use the DB as a sort of prefix for my keys? E.g. store all session keys in db 0 and some messages in db 1 and so on? Or should I keep all my applications keys in the same db?
Quoting my answer from this question:
It depends on your use case, but my rule of thumb is: if you have a very large quantity of related data keys that are unrelated to all the rest of your data in Redis, put them in a new database. Reasons being:
You may need to (non-ideally) use the keys command to get all of that data at some point, and having the data segregated makes that much cheaper.
You may want to switch to a second Redis server later, and having related data pre-segregated makes this much easier.
You can keep your databases named somewhere, so it's easier for you, or a new employee, to figure out where to look for particular data.
Conversely, if your data is related to other data, they should always live in the same database, so you can easily write pipelines and Lua scripts that can access both.
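For what it's worth, with redis-py the db is fixed when the client is constructed, so "separating by database" simply means creating one client per database. A minimal sketch, with made-up key names:
import redis

# One client per logical database; the db index is set at connection time.
sessions = redis.StrictRedis(host='localhost', port=6379, db=0)
messages = redis.StrictRedis(host='localhost', port=6379, db=1)

sessions.setex('session:abc123', 3600, 'user:42')  # session data lives in db 0
messages.lpush('inbox:user:42', 'hello')           # message data lives in db 1

# The alternative is a single db, with key prefixes doing the separating:
r = redis.StrictRedis(host='localhost', port=6379, db=0)
r.setex('session:abc123', 3600, 'user:42')
r.lpush('message:inbox:user:42', 'hello')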

Which data load method is the best for performance?

For example, I have a user object stored in a database (Redis).
It has several fields:
String nick
String password
String email
List posts
List comments
Set followers
and so on...
In my Python program I have a class (User) with the same fields for this object. Instances of this class map to objects in the database. The question is how to get data from the DB for the best performance:
Load the values for every field when the instance is created and initialize the fields with them.
Load a field's value each time that field is requested.
Like the second option, but after loading a value, replace the field property with the loaded value.
P.S. Redis runs on localhost.
The method entirely depends on the requirements.
If there is only one client reading and modifying the properties, this is a rather simple problem. When modifying data, just change the instance attributes in your current Python program and -- at the same time -- keep the DB in sync while keeping your program responsive. To that end, you should outsource blocking calls to another thread or make use of greenlets. If there is only one client, there definitely is no need to fetch a property from the DB on each value lookup.
If there are multiple clients reading the data and only one client modifying the data, you have to think about which level of synchronization you need. If you need 100 % synchronization, you will have to fetch data from the DB on each value lookup.
If there are multiple clients changing the data in the database you better look into a rock-solid industry standard solution rather than writing your own DB cache/mapper.
Your distinction between (2) and (3) does not really make sense. If you fetch data on every lookup, there is no need to 'store' data. You see, if there can be multiple clients involved these things quickly become quite complex and it's really hard to get it right.
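That said, for the single-client case a lazy-load-and-cache pattern (roughly your option 3) is straightforward. A minimal sketch, assuming each user is stored as a Redis hash under a user:<id> key, which is just an invented layout:
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True)

class User(object):
    """Loads scalar fields lazily from a Redis hash and caches them on the instance."""

    _fields = ('nick', 'password', 'email')

    def __init__(self, user_id):
        self.user_id = user_id

    def __getattr__(self, name):
        # Only called when the attribute is not already set on the instance.
        if name in self._fields:
            value = r.hget('user:%s' % self.user_id, name)
            setattr(self, name, value)  # cache, so later lookups skip Redis
            return value
        raise AttributeError(name)

u = User(1)
print(u.nick)  # first access hits Redis
print(u.nick)  # second access reads the cached attribute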

Is it safe to use user input as a key_name?

I would like to use a string that was input by the user in a web form as part of a key name:
user_input = self.request.POST.get('foo')
if user_input:
    foo = Foo.get_or_insert(user_input[:100], parent=my_parent)
Is this safe? Or should I do some inexpensive encoding or hash? If yes, which one?
It's safe as long as you don't care about a malicious user filling up your database with junk. get_or_insert won't let them overwrite existing entries, just add new ones.
Make sure you limit its length (both in the UI and after it's been received), even if you do no other validation on it, so at least they can't give you crazy big keys, either to fill up the database quickly or to crash your app.
Edit: You just commented that you do, in fact, verify that it's a reasonable key. In that case, yes, it's safe.
Keep in mind that the user can probably still figure out which keys are already in your database, based on how long it takes you to respond to what they've provided. You still need to make sure they're authorized to see whatever content they request, or limit them to a small number of requests so they can't just brute-force retrieve all the information linked to the keys you're generating.
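If you want a cheap guard on top of the [:100] slice, one option is to hash anything oversized so the key_name stays short while remaining distinct per input. A sketch (sha256 is just one reasonable choice; Foo and my_parent are the names from the question):
import hashlib

MAX_KEY_NAME_LEN = 100  # same cap as the [:100] slice above

def safe_key_name(user_input):
    """Return a bounded key_name derived from untrusted input."""
    if len(user_input) <= MAX_KEY_NAME_LEN:
        return user_input
    # Hash oversized input so the key stays short and still unique per input.
    return hashlib.sha256(user_input.encode('utf-8')).hexdigest()

user_input = self.request.POST.get('foo')
if user_input:
    foo = Foo.get_or_insert(safe_key_name(user_input), parent=my_parent)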
