How to customize hover information depending on variable value - python

I have a dataframe with several columns. I'm interested in displaying some of this info when hovering over a specific point using plotly with Python. I've discovered how to do this by passing customdata=dataframe[["A","B"]] to my go.Scatter.
The problem is that when I create the hovertemplate, it can happen that I don't want to display some info if that value is not available.
This is the code snippet:
fig.add_trace(
    go.Scatter(
        x=df['Request time'], y=df['Taken time'], name="System performance",
        hovertemplate=
            "<br>Request period: %{text}<br>" +
            "Taken time: %{y}<br>" +
            "Error msg: %{customdata[0]}<br>" +
            "LogCorr: %{customdata[1]}<br>",
        customdata=df[["ai", "logCorr"]],
        text=df['Request period']),
    secondary_y=True,
)
This is working fine, but if I leave it like this, the hover looks quite ugly when the "ai" value is empty.
I was wondering if there is some way to display hover info when the column value exists (in my case) and display nothing when it doesn't.
Something as "easy" as `"" if X==0 else "Error message: %{customdata[1]}"` in the hovertemplate.
This does not work because customdata means nothing to Python at execution time. I've tried to find out which language is used inside hovertemplate, but no luck.
Thanks a lot for your help
Sure, @Phoenix, thanks for your help:
Result,Request period,Request time,Taken time,rc,ac,ai,logCorr,rcvTS,to
0,2022.09.21D15:50:06-2022.09.21D15:51:06,2022-09-21 15:54:28.620446,0:00:00.392736,0,0,,2f85cfdd-9539-49d7-83af-8df98619dd52,2022-09-21 15:54:28.939000,2022-09-21 15:54:58.939000
0,2022.09.21D15:51:10-2022.09.21D15:52:10,2022-09-21 15:54:29.367677,0:00:00.367510,0,0,,18e61236-3b70-46e9-8b0d-c1bdc7557ed8,2022-09-21 15:54:29.662000,2022-09-21 15:54:59.662000
0,2022.09.21D15:52:19-2022.09.21D15:53:19,2022-09-21 15:54:30.085704,0:00:00.357543,0,0,,7c3784fb-481b-432b-80d9-4c1654bcb565,2022-09-21 15:54:30.383000,2022-09-21 15:55:00.383000
0,2022.09.21D15:53:26-2022.09.21D15:54:26,2022-09-21 15:54:30.792658,0:00:00.375392,0,0,Time out,b52126a5-7c1b-4898-811d-65446fdda65d,2022-09-21 15:54:31.092000,2022-09-21 15:55:01.092000
In this example the last line contains a timeout error and the rest of the lines don't contain any error. When I run this, "Time out" is properly displayed; the problem is that when there is no ai value, I see what you see in my screenshot.
When a column in the csv has no value, pandas reads it as NaN.
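One workaround worth sketching (untested against the asker's data): plotly accepts a list of hovertemplates, one per point, so the template can be built per row in plain Python, where a missing value can actually be checked, rather than inside the template language. The rows below are hypothetical stand-ins for df[["ai", "logCorr"]].

```python
# Hypothetical stand-ins for df[["ai", "logCorr"]]; with a real dataframe you
# would check pd.notna(row["ai"]) instead of simple truthiness (NaN is truthy).
rows = [
    {"ai": None, "logCorr": "2f85cfdd"},
    {"ai": "Time out", "logCorr": "b52126a5"},
]

def make_template(row):
    parts = ["<br>Request period: %{text}<br>", "Taken time: %{y}<br>"]
    if row["ai"]:  # drop the error line entirely when ai is missing
        parts.append("Error msg: %{customdata[0]}<br>")
    parts.append("LogCorr: %{customdata[1]}<br>")
    return "".join(parts)

templates = [make_template(r) for r in rows]
# go.Scatter(..., hovertemplate=templates, customdata=df[["ai", "logCorr"]], ...)
```

The per-point list replaces the single template string, so each point only renders the lines that apply to it.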

Related

Converting InfluxQL v1 query to Flux query in python -- getting last reading for every key-value tag

So I am new to InfluxDB (v1) and even newer to InfluxDB v2 and Flux. Please don't be an arse, as I am really trying hard to get my python code working again.
Recently I upgraded my InfluxDB database from v1.8 to 2.6. This has been an absolute challenge, but I think I have things working for the most part (at least inserting data back into the database). Reading items out of the database, however, has been especially challenging, as I can't get my python code to work.
This is what I previously used in my python code when I was running InfluxDB 1.8 with InfluxQL. Essentially I need to convert these InfluxQL queries to Flux and get the expected results:
meterids = influx_client.query('show tag values with key = "meter_id"')
metervalues = influx_client.query('select last(reading) from meter_consumption group by *;')
With InfluxDB v2.6 I must use Flux queries. For 'meterids' I do the following and it seems to work. (This took me days to figure out.)
meterid_list = []
metervalues_list = []
query_api = influx_client.query_api()
querystr = 'from(bucket: "rtlamr_bucket") \
    |> range(start: -1h) \
    |> keyValues(keyColumns: ["meter_id"])'  # gives a bunch of meter ids, formatted like [('reading', '46259044'), ('reading', '35515159'), ...]
result = query_api.query(query=querystr)
for table in result:
    for record in table.records:
        meterid_list.append(record.get_value())
print('This is meterids: %s' % (meterid_list))
But when I try to pull the actual last reading for each meter_id (the meter_consumption), I can't seem to get any Flux query to work. This is what I currently have:
# metervalues = influx_client.query('select last(reading) from meter_consumption group by *;')
querystrconsumption = 'from(bucket: "rtlamr_bucket") \
    |> range(start: -2h) \
    |> filter(fn: (r) => r._measurement == "meter_consumption") \
    |> group(columns: ["_time"], mode: "by") \
    |> last()'
resultconsumption = query_api.query(query=querystrconsumption)
for tableconsumption in resultconsumption:
    for recordconsumption in tableconsumption.records:
        metervalues_list.append(recordconsumption.get_value())
print('\n\nThis is metervalues: %s' % (metervalues_list))
Not sure if this will help, but in v1.8 of influxdb these were my measurements, tags and fields:
Time: timestamp
Measurement: consumption <-- consumption is the "measurement name"
Key-Value Tags (meta): meter_id, meter_type
Key-Value Fields (data): <meter_consumption in gal, ccf, etc.>
Any thoughts, suggestions or corrections would be most greatly appreciated. Apologies if I am not using the correct terminology. I have tried reading tons of google articles but I can't seem to figure this one out. :(
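For what it's worth, here is a sketch of a Flux query that is closer to the v1 `group by *` semantics (untested, and it assumes the tag really is named meter_id): grouping by _time produces one group per timestamp, whereas grouping by the series tag gives one group per meter, so last() returns each meter's latest reading.

```python
# Group by the meter_id tag, not _time, so |> last() yields one reading per meter.
querystrconsumption = '''
from(bucket: "rtlamr_bucket")
  |> range(start: -2h)
  |> filter(fn: (r) => r._measurement == "meter_consumption")
  |> group(columns: ["meter_id"], mode: "by")
  |> last()
'''
# resultconsumption = query_api.query(query=querystrconsumption)
```

Each resulting table then corresponds to one meter_id, with a single last record in it.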

Using "contains (text)" to find parent and following sibling in selenium with Python?

So I'm trying to build a tool to transfer tickets that I sell. A sale comes into my POS, and I do an API call for the section, row, and seat numbers ordered (as well as other information, obviously). Using the section, row, and seat number, I want to plug those values into a contains(text()) statement in order to find and select the right tickets on the host site.
Here is a sample of how the tickets are laid out:
And here is a screenshot (sorry if this is inconvenient) of the DOM related to one of the rows above:
Given this, how should I structure my contains(text()) statement so that it is able to find and select the correct seats? I am very new/inexperienced with automation. I messed around with it a few months ago with some success and have managed to get a tool that gets me right up to selecting the seats, but the "div" path confuses me when it comes to searching for text that is tied to other text.
I tried the following structure:
for i in range(int(lowseat), int(highseat)):
    web.find_element_by_xpath('//*[contains (text(), "'+section+'")]/following-sibling::[contains text(), "'+row+'")]/following-sibling::[contains text(), "'+str(i)+'")]').click()
to no avail. Can someone help me explain how to structure these statements correctly so that it searches for section, row, and seat number correctly?
Thanks!
Also, if needed, here is a screenshot with more context of the button (in cases its needed). Button is highlighted in sky blue:
You can't use text() for that because the text is in nested elements. You probably want to map all these into dicts and select with filter.
Update
Here's an idea for a lazy way to do this (untested):
button = driver.execute_script('''
return [...document.querySelectorAll('button')].find(b => {
return b.innerText.match(/Section 107\b.*Row P.*Seat 10\b/)
})
''')
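If you do want to stay in XPath: `contains(., "...")` tests an element's full string value, including text in nested descendants, unlike `contains(text(), "...")`, which only sees direct text nodes. A sketch (the `button` element and the "Section/Row/Seat" label format are assumptions, not taken from the real page):

```python
section, row, seat = "107", "P", "10"

# contains(., ...) matches against the concatenated text of the element and
# all of its descendants, so nested <div>s inside the button don't matter.
xpath = (
    f'//button[contains(., "Section {section}")'
    f' and contains(., "Row {row}")'
    f' and contains(., "Seat {seat}")]'
)
# web.find_element_by_xpath(xpath).click()
```

One caveat: plain substring matching will also match "Seat 100" when looking for "Seat 10", so tightening the label text may be necessary.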

Python folium: Present content dependent on fields=['id'] in GeoJsonPopup

I created a map using python folium in JupyterLab. On the map I display some GeoJSON files as shapes.
What works so far:
The shapes from the GeoJSON file are displayed nicely on the map. I can change the color of the shapes with a self-written style_function that checks feature['properties']['id'] to adjust the style accordingly.
I'm also able to get a GeoJsonPopup when clicking a shape. The popup shows "id" and the content of the id property of that shape.
geo_popup = folium.GeoJsonPopup(fields=['id'])
my_json = folium.GeoJson(file.path, style_function=style_function, popup=geo_popup)
my_json.add_to(map)
What I want:
I want to display some content in the popup based on the id. Very basic example: if id = 1 I want to display 'This is the region Alpha', or if id = 2 -> 'This area is beautiful'.
Alternatively, if that is not possible, I would like to present a link in the popup through which I can access a page with a parameter to show dedicated content for that id.
What I tried
I tried to derive a class from folium.GeoJsonPopup and somehow write content in the render function. However, I don't really understand how it works, and therefore nothing I did was successful. Probably I took a wrong path somewhere and the solution is pretty easy.
Thanks for advice!
I followed the sample linked in the comment to the question. Accordingly, I had to add the needed dict entries to the features' properties.
Therefore I can link to this question. I used the .update from the solution's last comment to add the values.
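In sketch form, that approach looks like this (the descriptions dict and the GeoJSON skeleton are made up for illustration): inject a description property into each feature, then let the popup display that field instead of id.

```python
# Hypothetical id -> text mapping and a minimal GeoJSON skeleton.
descriptions = {1: "This is the region Alpha", 2: "This area is beautiful"}

geojson = {"type": "FeatureCollection", "features": [
    {"type": "Feature", "properties": {"id": 1}, "geometry": None},
    {"type": "Feature", "properties": {"id": 2}, "geometry": None},
]}

# Add a 'description' property per feature, derived from its id.
for feature in geojson["features"]:
    fid = feature["properties"]["id"]
    feature["properties"].update({"description": descriptions.get(fid, "")})

# geo_popup = folium.GeoJsonPopup(fields=['description'])
# folium.GeoJson(geojson, style_function=style_function, popup=geo_popup).add_to(map)
```

The popup then shows per-id text; a link could be injected the same way by writing an HTML anchor string into the property.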

python lxml xpath AttributeError (NoneType) with correct xpath and usually working

I am trying to migrate a forum to phpbb3 with python/xpath. Although I am pretty new to python and xpath, it is going well. However, I need help with an error.
(The source file has been downloaded and processed with tagsoup.)
Firefox/Firebug show xpath: /html/body/table[5]/tbody/tr[position()>1]/td/a[3]/b
(in my script without tbody)
Here is an abbreviated version of my code:
forumfile="morethread-alte-korken-fruchtweinkeller-89069-6046822-0.html"
XPOSTS = "/html/body/table[5]/tr[position()>1]"
t = etree.parse(forumfile)
allposts = t.xpath(XPOSTS)
XUSER = "td[1]/a[3]/b"
XREG = "td/span"
XTIME = "td[2]/table/tr/td[1]/span"
XTEXT = "td[2]/p"
XSIG = "td[2]/i"
XAVAT = "td/img[last()]"
XPOSTITEL = "/html/body/table[3]/tr/td/table/tr/td/div/h3"
XSUBF = "/html/body/table[3]/tr/td/table/tr/td/div/strong[position()=1]"
for p in allposts:
    unreg = 0
    username = None
    username = p.find(XUSER).text  # this is where it goes haywire
When the loop hits user "tompson" / position()=11 at the end of the file, I get
AttributeError: 'NoneType' object has no attribute 'text'
I've tried a lot of try except else finallys, but they weren't helpful.
I am getting much more information later in the script such as date of post, date of user registry, the url and attributes of the avatar, the content of the post...
The script works for hundreds of other files/sites of this forum.
This is no encode/decode problem. And it is not "limited" to the XUSER part. I tried to "hardcode" the username, then the date of registry will fail. If I skip those, the text of the post (code see below) will fail...
# text of getpost
text = etree.tostring(p.find(XTEXT), pretty_print=True)
Now, this whole error would make sense if my xpath were wrong. However, all the other files and the first users in this file work; it is only this one at position()=11.
Is position() incapable of going >10? I don't think so.
Am I missing something?
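(For anyone hitting the same AttributeError: find() returns None when the path matches nothing, so guarding before touching .text avoids the crash. A minimal sketch with xml.etree stand-ins; the real tree would come from the tagsoup-cleaned HTML.)

```python
import xml.etree.ElementTree as ET

# Two stand-in post rows: one with the expected <a><b>user</b></a>, and one
# where an extra element means td/a[3]/b matches nothing.
posts = [
    ET.fromstring('<tr><td><a/><a/><a><b>alice</b></a></td></tr>'),
    ET.fromstring('<tr><td><a/><a/><a>no b element here</a></td></tr>'),
]

XUSER = "td/a[3]/b"
usernames = []
for p in posts:
    el = p.find(XUSER)  # find() returns None instead of raising
    usernames.append(el.text if el is not None else None)
```

This only papers over the symptom, of course; the missing username still signals that the xpath and the actual markup disagree for that post.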
Question answered!
I have found the answer...
I must have been very tired when I tried to fix it and came here to ask for help. I did not see something quite obvious...
The way I posted my problem, it was not visible either.
The HTML I downloaded and processed with tagsoup had an additional tag at position 11... this was not visible on the website and screwed with my xpath.
(It is probably crappy html generated by the forum in combination with tagsoup's attempt to make it parseable.)
Out of >20000 files, fewer than 20 are affected; this one here just happened to be the first...
Additionally, sometimes the information is in table[4], other times in table[5]. I did account for this and wrote a function to determine the correct table. Although I tested the function a LOT and thought it was working correctly (hence I did not include it above), it was not.
So I made a better xpath:
'/html/body/table[tr/td[@width="20%"]]/tr[position()>1]'
and, although this is not related, I ran into another problem with unexpected encoding in the html file (not utf-8), which was fixed by adding:
parser = etree.XMLParser(encoding='ISO-8859-15')
t = etree.parse(forumfile, parser)
I am now confident that, after adjusting for strange additional and duplicated tags, my code will work on all files...
Still, I will be looking into lxml.html; as I mentioned in the comment, I have never used it before, but if it is more robust and allows for using the files without tagsoup, it might be a better fit and save me extensive try/except statements and loops to fix the few files that screw with my current script...

quickfix: how to get Symbol (tag 55) from messages?

I'm running QuickFix with the Python API and connecting to a TT FIX Adapter using FIX 4.2.
I am logging on and sending a market data request for two instruments. That works fine, and data from the instruments comes in as expected. I can get all kinds of information from the messages.
However, I am having trouble getting the Symbol (tag 55) field.
import quickfix as fix

def fromApp(self, message, sessionID):
    ID = fix.Symbol()
    message.getField(ID)
    print(ID)
This works for the very first message [the initial Market Data Snapshot (35=W)] that comes to me. Once I start getting incremental refreshes (35=X), I can no longer get the Symbol field. Every message that arrives results in a Field Not Found error.
This is confusing me because, in the logs, the Symbol field is always present, whether the message type is W or X.
Thinking the Symbol might be in the header of refresh messages, I tried getField(ID) when 35=W and getHeader().getField(ID) when 35=X; however, this did not work.
Can somebody help me figure out what is going on here? I would like to be able to explicitly tell my computer what instruments it is looking at.
Thanks
Your question is pretty simple, but you've mixed in some misconceptions as well.
1) Symbol will never be in the header. It is a body field.
2) In X messages, the symbol is in a repeating group. You first have to get a group object with msg.getGroup(), then get the symbol from that. See this example code, from the repeating groups doc page.
3) In W messages, the symbol is not in a group. That's why it works for you there.
It seems clear you are pretty new to QuickFIX and FIX in general. I think you should take a few minutes and skim through the "Working with Messages" section of the docs.
Also, the FIXimate website can be your best friend.
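To make the structure concrete: in a 35=X message, tag 55 repeats once per entry inside the 268 (NoMDEntries) repeating group. The quickfix calls below are an untested sketch in comments; the toy parser underneath just shows where the tag sits in the raw message (SOH rendered as "|", and the instrument/checksum values are made up).

```python
# Untested quickfix sketch for reading Symbol from each group of a 35=X message:
#
#   numEntries = fix.NoMDEntries()
#   message.getField(numEntries)
#   group = fix42.MarketDataIncrementalRefresh.NoMDEntries()
#   for i in range(1, numEntries.getValue() + 1):
#       message.getGroup(i, group)
#       symbol = fix.Symbol()
#       group.getField(symbol)
#
# The raw wire format shows why the top-level getField(55) fails: tag 55
# lives inside each repeating entry, not once at the top level.
raw = "8=FIX.4.2|35=X|268=2|279=0|269=0|55=CLZ2|270=85.1|279=0|269=1|55=CLZ2|270=85.2|10=000|"
pairs = [f.split("=", 1) for f in raw.strip("|").split("|")]
symbols = [v for tag, v in pairs if tag == "55"]
```

Here symbols collects one value per group entry, which is exactly what the getGroup loop would produce.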
