I'm running QuickFIX with the Python API, connecting to a TT FIX Adapter using FIX 4.2.
I am logging on and sending a market data request for two instruments. That works fine and data from the instruments comes in as expected. I can get all kinds of information from the messages.
However, I am having trouble getting the Symbol (tag 55) field.
import quickfix as fix

def fromApp(self, message, sessionID):
    ID = fix.Symbol()
    message.getField(ID)
    print(ID)
This works for the very first message [the initial Market Data Snapshot (tag 35=W)] that comes to me. Once I start getting incremental refreshes (tag 35=X), I can no longer get the Symbol field: every such message results in a Field Not Found error.
This is confusing me because in the logs, the Symbol field is always present, whether the message type is W or X.
Thinking the Symbol might be in the header of refresh messages, I tried message.getField(ID) when 35=W and message.getHeader().getField(ID) when 35=X, but this did not work.
Can somebody help me figure out what is going on here? I would like to be able to explicitly tell my computer what instruments it is looking at.
Thanks
Your question is pretty simple, but you've mixed in some misconceptions as well.
1) Symbol will never be in the header. It is a body field.
2) In X messages, the Symbol is inside a repeating group. You first have to extract a group object with message.getGroup(), then get the Symbol from that group. See the example code on the repeating-groups doc page.
3) In W messages, the symbol is not in a group. That's why it works for you there.
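To make the difference concrete, here is a pure-Python illustration (not QuickFIX itself; the message string is invented) of why the lookup fails: in a 35=X message, tag 55 lives inside each repeated NoMDEntries entry, not at the top level of the body.

```python
# Illustrative only: split a raw 35=X message on the SOH delimiter and
# collect the per-entry fields of the NoMDEntries repeating group.
# In FIX 4.2, tag 279 (MDUpdateAction) starts each new group entry.
SOH = "\x01"

def group_entries(raw):
    """Return top-level fields and a list of repeating-group entries."""
    top, entries, current = {}, [], None
    for pair in raw.strip(SOH).split(SOH):
        tag, _, value = pair.partition("=")
        if tag == "279":            # first field of each entry
            current = {}
            entries.append(current)
        if current is None:
            top[tag] = value        # header / top-level body field
        else:
            current[tag] = value    # belongs to the current entry
    return top, entries

# A made-up incremental refresh with two entries:
raw = SOH.join([
    "8=FIX.4.2", "35=X", "268=2",
    "279=0", "269=0", "55=ES", "270=4000", "271=10",
    "279=0", "269=1", "55=ES", "270=4001", "271=12",
]) + SOH

top, entries = group_entries(raw)
print([e["55"] for e in entries])  # → ['ES', 'ES']; note "55" is not in `top`
```

With QuickFIX itself, the equivalent access is `group = quickfix42.MarketDataIncrementalRefresh.NoMDEntries()`, then `message.getGroup(1, group)` and `group.getField(fix.Symbol())`, looping the first argument from 1 up to the count in NoMDEntries (tag 268).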
It seems clear you are pretty new to QuickFIX and FIX in general. I think you should take a few minutes and skim through the "Working with Messages" section of the docs.
Also, the FIXimate website can be your best friend.
I am trying to send a buy order, but I am not receiving a response from the server. The session is correct, and I am connected on port 5202. The Python code is:
mdr = fix.Message()
mdr.getHeader().setField(fix.BeginString(fix.BeginString_FIX44))
mdr.getHeader().setField(fix.MsgType(fix.MsgType_NewOrderSingle))
mdr.getHeader().setField(fix.TargetSubID('TRADE'))
mdr.getHeader().setField(fix.SenderSubID('TRADE'))
mdr.setField(fix.ClOrdID(str(self.genExecID())))
mdr.setField(fix.HandlInst('1'))
mdr.setField(fix.Side('1'))
mdr.setField(fix.Symbol('1'))
mdr.setField(fix.OrderQty(0.01))
mdr.setField(fix.Currency('EUR'))
mdr.setField(fix.TimeInForce('1'))
mdr.setField(fix.OrdType('1'))
trstime = fix.TransactTime()
trstime.setString(datetime.utcnow().strftime('%Y%m%d-%H:%M:%S.%f')[:-3])
mdr.setField(trstime)
fix.Session.sendToTarget(mdr, self.sessionID)
And the message it generates is:
8=FIX.4.4|9=158|35=D|34=2|49=demo.ctrader.3449248|50=TRADE|52=20220310-10:37:36.000|56=CSERVER|57=TRADE|11=1|15=EUR|21=1|38=0.01|40=1|54=1|55=1|59=1|60=20220310-10:37:36.898|10=130|
Does anyone see any missing fields or errors in the message? Thank you very much.
This might be a bit late...but I had a similar issue and changed my fix message to the following:
Tag 50 (SenderSubID) should be a random string of characters for the particular session you are logging in with, and should be changed with every new login. If you log in multiple times with the same SenderSubID, I think the server rejects the logon.
Tag 38 (OrderQty, the lot size): you used "0.01". Try "1000" instead.
"CSERVER" should be "cServer".
hope it helps
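As a general sanity check for FIX messages like the one in the question, the raw string can be parsed into tag/value pairs with plain Python (the SOH separator, \x01, is written as "|" here for readability), which makes odd values such as 55=1 easy to spot:

```python
# Parse a raw FIX message into (tag, value) pairs.
SOH = "\x01"

def parse_fix(raw, sep=SOH):
    """Split a raw FIX string into a list of (tag, value) tuples."""
    return [tuple(p.split("=", 1)) for p in raw.strip(sep).split(sep) if p]

# The message from the question, with "|" standing in for SOH:
msg = ("8=FIX.4.4|9=158|35=D|34=2|49=demo.ctrader.3449248|50=TRADE|"
       "52=20220310-10:37:36.000|56=CSERVER|57=TRADE|11=1|15=EUR|21=1|"
       "38=0.01|40=1|54=1|55=1|59=1|60=20220310-10:37:36.898|10=130|")

fields = dict(parse_fix(msg, sep="|"))
print(fields["35"], fields["38"], fields["55"])  # → D 0.01 1
```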
Hello, I am trying to scrape the tweets of a certain user using Tweepy.
Here is my code :
tweets = []
username = 'example'
count = 140  # number of tweets

try:
    # Pulling individual tweets from query
    for tweet in api.user_timeline(id=username, count=count, include_rts=False):
        # Adding to list that contains all tweets
        tweets.append(tweet.text)
except BaseException as e:
    print('failed on_status,', str(e))
    time.sleep(3)
The problem I am having is the tweets are coming back unfinished with "..." at the end.
I think I've looked at all the similar problems on Stack Overflow and elsewhere, but nothing works. Most don't concern me, because I am NOT dealing with retweets.
I have tried putting tweet_mode='extended' and/or tweet.full_text or tweet._json['extended_tweet']['full_text'] in different combinations.
I don't get an error message, but nothing works; I just get an empty list in return.
And it looks like the documentation is out of date, because it says nothing about the 'tweet_mode' or 'include_rts' parameters:
Has anyone managed to get the full text of each tweet?? I'm really stuck on this seemingly simple problem and am losing my hair so I would appreciate any advice :D
Thanks in advance!!!
TL;DR: You're most likely running into a Rate Limiting issue. And use the full_text attribute.
Long version:
First,
The problem I am having is the tweets are coming back unfinished with "..." at the end.
From the Tweepy documentation on Extended Tweets, this is expected:
Compatibility mode
... It will also be discernible that the text attribute of the Status object is truncated as it will be suffixed with an ellipsis character, a space, and a shortened self-permalink URL to the Tweet.
Regarding:
And It looks like the documentation is out of date because it says nothing about the 'tweet_mode' nor the 'include_rts' parameter :
They haven't explicitly added it to the documentation of each method, however, they specify that tweet_mode is added as a param:
Standard API methods
Any tweepy.API method that returns a Status object accepts a new tweet_mode parameter. Valid values for this parameter are compat and extended , which give compatibility mode and extended mode, respectively. The default mode (if no parameter is provided) is compatibility mode.
So without tweet_mode added to the call, you get the tweets with partial text, and with it, all you get is an empty list? If so, remove it and immediately retry, and verify that you still get an empty list. That is, once you get an empty-list result, check whether you keep getting an empty list even after changing the params back to the ones that worked.
Based on bug #1329 - API.user_timeline sometimes returns an empty list - it appears to be a Rate Limiting issue:
Harmon758 commented on Feb 13
This API limitation would manifest itself as exactly the issue you're describing.
Even when it is working, the text is in the full_text attribute, not the usual text. So the line
tweets.append((tweet.text))
should be
tweets.append(tweet.full_text)
(and you can drop the extra enclosing parentheses)
Btw, if you're not interested in retweets, see this example for the correct way to handle them:
Given an existing tweepy.API object and id for a Tweet, the following can be used to print the full text of the Tweet, or if it’s a Retweet, the full text of the Retweeted Tweet:
status = api.get_status(id, tweet_mode="extended")
try:
    print(status.retweeted_status.full_text)
except AttributeError:  # Not a Retweet
    print(status.full_text)
If status is a Retweet, status.full_text could be truncated.
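Putting the above together, the fallback logic can be wrapped in a small helper. This is plain Python operating on any status-like object (types.SimpleNamespace stands in for a tweepy Status here), so the same selection order works whether or not extended mode was requested:

```python
from types import SimpleNamespace

def best_text(status):
    """Prefer the retweeted tweet's full text, then full_text, then text."""
    rt = getattr(status, "retweeted_status", None)
    if rt is not None and getattr(rt, "full_text", None):
        return rt.full_text
    return getattr(status, "full_text", None) or status.text

# Stand-ins for tweepy Status objects:
plain = SimpleNamespace(text="short…", full_text="the whole tweet text")
retweet = SimpleNamespace(
    text="RT @someone: short…",
    full_text="RT @someone: still trunc…",
    retweeted_status=SimpleNamespace(full_text="the original full text"),
)
print(best_text(plain))    # → the whole tweet text
print(best_text(retweet))  # → the original full text
```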
As for the Twitter API v2:
tweet_mode does not work there at all. You need to add expansions=referenced_tweets.id. Then, in the response, look at includes: you can find all the truncated tweets as full tweets there. You will still see the truncated tweets in the main response, but do not worry about that.
I am using the SoundCloud API through the Python SDK.
When I get track data through 'Search', the track attribute playback_count seems to be smaller than the actual count shown on the web.
How can I avoid this problem and get the actual playback_count?
(For example, this track's playback_count gives me 2700, but it is actually 15k when displayed on the web: https://soundcloud.com/drumandbassarena/ltj-bukem-soundcrash-mix-march-2016)
Note: this problem does not occur for comments or likes.
Following is my code:
##Search##
tracks = client.get('/tracks', q=querytext, created_at={'from': startdate}, duration={'from': startdur}, limit=200)
outputlist = []
resultnum = 0

for t in tracks:
    trackinfo = {}
    resultnum += 1
    trackinfo["id"] = resultnum
    trackinfo["title"] = t.title
    trackinfo["username"] = t.user["username"]
    trackinfo["created_at"] = t.created_at[:-5]
    trackinfo["genre"] = t.genre
    trackinfo["plays"] = t.playback_count
    trackinfo["comments"] = t.comment_count
    trackinfo["likes"] = t.likes_count
    trackinfo["url"] = t.permalink_url
    outputlist.append(trackinfo)
There is an issue with the playback count being incorrect when reported via the API.
I have encountered this when getting data via the /me endpoint, for activity and likes among others: the counts returned by the API (e.g. from me/activities) do not match what the SoundCloud widget shows for the same, currently playing track.
Looking at the SoundCloud website, they actually call a second version of the API to populate the track list on the user page. It's similar to the documented version, but not quite the same.
If you issue a request to https://api-v2.soundcloud.com/stream/users/[userid]?limit=20&client_id=[clientid] then you'll get back a JSON object showing the same numbers you see on the web.
Since this is an undocumented version, I'm sure it'll change the next time they update their website.
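For reference, a minimal sketch of calling that endpoint with only the standard library; the userid and client_id values are placeholders you must supply, and since the endpoint is undocumented, treat it as fragile:

```python
import json
import urllib.request

API_V2 = "https://api-v2.soundcloud.com"

def stream_url(userid, client_id, limit=20):
    """Build the undocumented v2 stream URL for a user."""
    return f"{API_V2}/stream/users/{userid}?limit={limit}&client_id={client_id}"

def fetch_stream(userid, client_id):
    """Fetch and decode the stream JSON (network call; needs a valid client_id)."""
    with urllib.request.urlopen(stream_url(userid, client_id)) as resp:
        return json.load(resp)

print(stream_url("12345", "YOUR_CLIENT_ID"))
```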
Using a Facebook API for Python, I am trying to get the number of people who shared my post, and who those people are. I currently have the first part:
>>> from facepy import *
>>> graph = GraphAPI("CAAEr")
>>> g = graph.get('apple/posts?limit=20')
>>> g['data'][10]['shares']
That gets the count, but I want to know who those people are.
The sharedposts connection will give you more information about the shares of a post. You have to GET POST_ID?fields=sharedposts. This information doesn't appear in the docs.
This follows your code:
# Going through each of your posts one by one
for post in g['data']:
    # Getting the post ID, which has the form USERID_POSTID
    post_id = post['id']
    # We know that a post ID is represented by the last 17 numerals
    post_id = post_id[-17:]
    # Another Graph request to get the shared posts
    shares = graph.get(post_id + '?fields=sharedposts')
    print('Post', post_id, 'was shared by:')
    # Display the name of each sharer
    for share in shares['data']:
        print(share['from']['name'])
It is my first time with Python, so there might be some syntax errors in the code, but you get the idea.
Regarding this code from python-blogger:
def listposts(service, blogid):
    feed = service.Get('/feeds/' + blogid + '/posts/default')
    for post in feed.entry:
        print(post.GetEditLink().href.split('/')[-1], post.title.text,
              "[DRAFT]" if is_draft(post) else "")
I want to know what fields exist in feed.entry, but I'm not sure where to look in these docs to find out.
So I don't just want an answer; I want to know how I should have navigated the docs to find out for myself.
Try dir(feed.entry).
It may be useful in your case.
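For example, filtering out the underscore names makes the dir() output much easier to scan. This works on any Python object; a stand-in class is used here for illustration:

```python
def public_attrs(obj):
    """List an object's non-underscore attribute names."""
    return [name for name in dir(obj) if not name.startswith("_")]

class Entry:
    """Stand-in for a feed entry object."""
    title = "hello"
    def GetEditLink(self):
        return None

print(public_attrs(Entry()))  # → ['GetEditLink', 'title']
```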
It's a case of working through it, step by step.
The first thing I did was click on service in the link you sent, based on feed = service.Get(...).
That leads here: http://gdata-python-client.googlecode.com/hg/pydocs/gdata.service.html
Then looking at .Get(), it states:
Returns:
If there is no ResultsTransformer specified in the call, a GDataFeed
or GDataEntry depending on which is sent from the server. If the
response is neither a feed nor an entry and there is no ResultsTransformer,
return a string. If there is a ResultsTransformer, the returned value
will be that of the ResultsTransformer function.
So, guessing you've got a GDataFeed, since you're iterating over it, a quick Google for "google GDataFeed" leads to: https://developers.google.com/gdata/jsdoc/1.10/google/gdata/Feed