Extracting a value from a Python variable - python

When I run the following code I get an output containing a value (1.11113) that I want to use later in the code (after this first section). The full output I get is shown after the code. Basically, what I'm trying to do is extract a real-time forex (stock) value to use in an order. This order would be placed after this initial code, within the same Python module. Thanks for your help.
import json
from oandapyV20.contrib.requests import MarketOrderRequest
from oandapyV20.contrib.requests import TakeProfitDetails, StopLossDetails
import oandapyV20.endpoints.orders as orders
import oandapyV20
import oandapyV20.endpoints.pricing as pricing
from exampleauth import exampleAuth
import argparse
from oandapyV20 import API
from oandapyV20.exceptions import V20Error
import oandapyV20.endpoints.instruments as instruments
from oandapyV20.definitions.instruments import CandlestickGranularity
import re
import oandapyV20.endpoints.pricing as pricing
# pricef=float(price)
# parser.add_argument('--price', choices=price, default='M', help='Mid/Bid/Ask')
accountID, access_token = exampleAuth()
api = oandapyV20.API(access_token=access_token)
params = {"instruments": "EUR_USD"}
r = pricing.PricingInfo(accountID=accountID, params=params)
rv = api.request(r)
print(rv)
OUTPUT
{'prices': [{'asks': [{'liquidity': 10000000, 'price': '1.11132'}],
             'bids': [{'liquidity': 10000000, 'price': '1.11113'}],
             'closeoutAsk': '1.11132',
             'closeoutBid': '1.11113',
             'instrument': 'EUR_USD',
             'quoteHomeConversionFactors': {'negativeUnits': '1.00000000',
                                            'positiveUnits': '1.00000000'},
             'status': 'tradeable',
             'time': '2020-05-31T23:02:34.271983628Z',
             'tradeable': True,
             'type': 'PRICE',
             'unitsAvailable': {'default': {'long': '3852555',
                                            'short': '3852555'},
                                'openOnly': {'long': '3852555',
                                             'short': '3852555'},
                                'reduceFirst': {'long': '3852555',
                                                'short': '3852555'},
                                'reduceOnly': {'long': '0', 'short': '0'}}}],
 'time': '2020-05-31T23:02:40.672716661Z'}

Your output is a dictionary containing a number of nested lists and dictionaries.
To access a value in a dictionary, you use the same syntax as when accessing members of a list, except that the key does not have to be a number; it can be almost any type, commonly a string. So rv['time'] in your case would yield '2020-05-31T23:02:40.672716661Z'.
Since the number 1.11113 appears twice in the dictionary, here are the two expressions that access the corresponding fields:
rv['prices'][0]['bids'][0]['price']
and
rv['prices'][0]['closeoutBid']
Both are strings, so to use the value as a number you have to convert it with float().
Also notice the occasional [0] used to access the first element of a list.
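Putting that together, a minimal sketch (continuing from the rv returned in your code) that pulls the bid price out as a number:
bid_str = rv['prices'][0]['bids'][0]['price']   # '1.11113', still a string
bid = float(bid_str)                            # 1.11113 as a float, usable when building the order
print(bid)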

Looks like you want:
rv['prices'][0]['bids'][0]['price']
at least for this case. You may not always want the first price entry or the first bid entry; in that case, sort or filter on whatever criteria identify the right entry when there is more than one.
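For example, a sketch that picks the bid with the greatest liquidity from the first price entry (the liquidity criterion here is just an illustration, not something your question asks for):
best_bid = max(rv['prices'][0]['bids'], key=lambda b: b['liquidity'])
bid_price = float(best_bid['price'])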

Related

Pandas list of JSON element to python array

I have a dataframe like so:
Store      matches
Murphy's   [{'domain': 'murphyscolumbus.com', 'location': 'Columbus, OH'}, {'domain': 'murphystampa.com', 'location': 'Tampa, FL'}]
Bill's     [{'domain': 'billsdallas.com', 'location': 'Dallas, TX'}, {'domain': 'billsorlando.com', 'location': 'Orlando, FL'}]
What I want is a dataframe like so:
Store      domains
Murphy's   ['murphyscolumbus.com', 'murphystampa.com']
Bill's     ['billsdallas.com', 'billsorlando.com']
I'm hoping for something less computationally expensive than a for loop that steps through row by row, as the dataframe is quite large.
Try:
from ast import literal_eval
# apply literal_eval if necessary
df["matches"] = df["matches"].apply(literal_eval)
df["domains"] = df.pop("matches").apply(lambda x: [d["domain"] for d in x])
print(df)
Prints:
Store domains
0 Murphy's [murphyscolumbus.com, murphystampa.com]
1 Bill's [billsdallas.com, billsorlando.com]
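For reference, a self-contained sketch with the example data from the question (this assumes the matches column already holds Python lists; if the cells are strings, keep the literal_eval step from above):
import pandas as pd

df = pd.DataFrame({
    "Store": ["Murphy's", "Bill's"],
    "matches": [
        [{'domain': 'murphyscolumbus.com', 'location': 'Columbus, OH'},
         {'domain': 'murphystampa.com', 'location': 'Tampa, FL'}],
        [{'domain': 'billsdallas.com', 'location': 'Dallas, TX'},
         {'domain': 'billsorlando.com', 'location': 'Orlando, FL'}],
    ],
})

# Keep only the 'domain' value from each dict in the list
df["domains"] = df.pop("matches").apply(lambda x: [d["domain"] for d in x])
print(df)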

List comprehension to iterate through dataframe

I have written code to encode one row of a dataframe to json, as follows:
def encode_df_metadata_row(df):
    return {'name': df['Title'].values[0], 'code': df['Code'].values[0],
            'frequency': df['Frequency'].values[0], 'description': df['Subtitle'].values[0],
            'source': df['Source'].values[0]}
Now I would like to encode an entire dataframe to json with some transformation, so I wrote this function:
def encode_metadata_list(df_metadata):
    return [encode_df_metadata_row(df_row) for index, df_row in df_metadata.iterrows()]
I then try to call the function using this code:
df_oodler_metadata = pd.read_csv('DATA\oodler-datasets-metadata.csv')
response = encode_metadata_list(df_oodler_metadata)
print(response)
When I run this code, I get the following error:
AttributeError: 'str' object has no attribute 'values'
I've tried a bunch of variations but I keep getting similar errors. Does someone know the right way to do this?
DataFrame.iterrows yields pairs of (index, row), where each row is a Series object. A Series stores a single element per column, so the .values[0] part in your encode_df_metadata_row(df) function is unnecessary - the correct form of the function is:
def encode_df_metadata_row(row):
    return {'name': row['Title'], 'code': row['Code'], 'frequency': row['Frequency'],
            'description': row['Subtitle'], 'source': row['Source']}
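With that change the rest of your code should work. A small self-contained sketch of the same pattern (the column names follow your example; the single data row here is made up):
import pandas as pd

df_metadata = pd.DataFrame([{
    'Title': 'Example dataset', 'Code': 'EX1', 'Frequency': 'daily',
    'Subtitle': 'An example', 'Source': 'example.org',
}])

def encode_df_metadata_row(row):
    return {'name': row['Title'], 'code': row['Code'], 'frequency': row['Frequency'],
            'description': row['Subtitle'], 'source': row['Source']}

def encode_metadata_list(df_metadata):
    return [encode_df_metadata_row(row) for index, row in df_metadata.iterrows()]

print(encode_metadata_list(df_metadata))
# [{'name': 'Example dataset', 'code': 'EX1', 'frequency': 'daily',
#   'description': 'An example', 'source': 'example.org'}]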

Returning an empty dictionary for the datasource of a datatable in plotly/dash

My callback function reads a value selected by the user (a site name), queries data for that site, and returns three figures and one dictionary (df.to_dict('records')) that supplies the data for a DataTable.
If the user selects a site for which there is no data, I return {}. That seems to break it. Normally, when I select a site, the data table fills in properly; switch to another site, same thing. But once I select a site with no data, the table no longer updates, no matter which site I select afterwards.
Some relevant code:
The output is defined as:
Output('emission_table','data'),
The return from the callback is:
return time_series_figure,emissions_df.to_dict('records'),site_map,hotspot_figure
html.Div(style={'float': 'left', 'padding': '5px', 'width': '49%'}, children=[
    dash_table.DataTable(
        id='emission_table', data=[],
        columns=[
            # {'id': "site", 'name': "Site"},
            {'id': "dateformatted", 'name': "date"},
            {'id': "device", 'name': "device"},
            {'id': "emission", 'name': "Emission"},
            {'id': "methane", 'name': "CH4"},
            {'id': "wdir", 'name': "WDIR"},
            {'id': "wspd", 'name': "WSPD"},
            {'id': "wd_std", 'name': "WVAR"}],
            # {'id': "url", 'name': '(Link for Google Maps)', 'presentation': 'markdown'}],
        fixed_rows={'headers': True},
        row_selectable='multi',
        style_table={'height': '500px', 'overflowY': 'auto'},
        style_cell={'textAlign': 'left'})
]),
Any ideas what is happening? Is there a better way for the callback to return an empty data source for the datatable?
Thanks!
You haven't shared enough of your code (your callback specifically) to see exactly what is happening; however:
If the user selects a site for which there is no data, I return {}
is at least one reason why it doesn't work. The data property of a Dash DataTable needs to be a list, not a dictionary. You can, however, put dictionaries inside the list; each dictionary corresponds to one row of the table.
So, to reiterate and answer your question more directly:
Is there a better way for the callback to return an empty data source for the datatable?
Yes: return a list. It can be empty ([]) or contain any number of row dictionaries.
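For illustration, a sketch of what the table part of such a callback could look like (simplified to a single output; the input id site_dropdown and the helper get_site_data are assumptions, not taken from your code):
@app.callback(
    Output('emission_table', 'data'),
    Input('site_dropdown', 'value'))        # assumed component id
def update_table(site):
    emissions_df = get_site_data(site)      # hypothetical helper returning a DataFrame or None
    if emissions_df is None or emissions_df.empty:
        return []                           # empty list, not {}
    return emissions_df.to_dict('records')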

Use list of indices to manipulate a nested dictionary

I'm trying to perform operations on a nested dictionary (data retrieved from a yaml file):
data = {'services': {'web': {'name': 'x'}}, 'networks': {'prod': 'value'}}
I'm trying to modify the above using inputs like:
{'services.web.name': 'new'}
I converted the above to a list of keys, ['services', 'web', 'name'], but I'm not sure how to perform the operation below in a loop:
data['services']['web']['name'] = new
That way I can modify the dict data. There are other values I plan to change in the above dictionary (it is an extensive one), so I need a solution that also works in cases where I have to change, e.g.:
data['services2']['web2']['networks']['local']
Is there an easy way to do this? Any help is appreciated.
You may iterate over the keys while moving a reference:
data = {'networks': {'prod': 'value'}, 'services': {'web': {'name': 'x'}}}
modification = {'services.web.name': 'new'}

for key, value in modification.items():
    keyparts = key.split('.')
    to_modify = data
    for keypart in keyparts[:-1]:
        to_modify = to_modify[keypart]
    to_modify[keyparts[-1]] = value

print(data)
Giving:
{'networks': {'prod': 'value'}, 'services': {'web': {'name': 'new'}}}
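The same idea, wrapped into a small reusable helper (the name set_by_path is mine):
def set_by_path(data, dotted_key, value):
    # Walk down to the parent of the last key, then assign the value there.
    keyparts = dotted_key.split('.')
    to_modify = data
    for keypart in keyparts[:-1]:
        to_modify = to_modify[keypart]
    to_modify[keyparts[-1]] = value

set_by_path(data, 'services.web.name', 'new')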

Using Python CSV DictReader to create multi-level nested dictionary

Total Python noob here, probably missing something obvious. I've searched everywhere and haven't found a solution yet, so I thought I'd ask for some help.
I'm trying to write a function that will build a nested dictionary from a large csv file. The input file is in the following format:
Product,Price,Cost,Brand,
blue widget,5,4,sony,
red widget,6,5,sony,
green widget,7,5,microsoft,
purple widget,7,6,microsoft,
etc...
The output dictionary I need would look like:
projects = {<Brand>: {<Product>: {'Price': <Price>, 'Cost': <Cost>}}}
But obviously with many different brands containing different products. In the input file, the data is ordered alphabetically by brand name, but I know that it becomes unordered as soon as DictReader executes, so I definitely need a better way to handle the duplicates. The if statement as written is redundant and unnecessary.
Here's the non-working, useless code I have so far:
def build_dict(source_file):
    projects = {}
    headers = ['Product', 'Price', 'Cost', 'Brand']
    reader = csv.DictReader(open(source_file), fieldnames=headers, dialect='excel')
    current_brand = 'None'
    for row in reader:
        if Brand != current_brand:
            current_brand = Brand
            projects[Brand] = {Product: {'Price': Price, 'Cost': Cost}}
    return projects

source_file = 'merged.csv'
print build_dict(source_file)
I have of course imported the csv module at the top of the file.
What's the best way to do this? I feel like I'm way off course, but there is very little information available about creating nested dicts from a CSV, and the examples that are out there are highly specific and tend not to explain why the solution actually works, so as someone new to Python it's a little hard to draw conclusions.
Also, the input csv file doesn't normally have headers, but for the sake of trying to get a working version of this function, I manually inserted a header row. Ideally, there would be some code that assigns the headers.
Any help/direction/recommendation is much appreciated, thanks!
import csv
from collections import defaultdict

def build_dict(source_file):
    projects = defaultdict(dict)
    headers = ['Product', 'Price', 'Cost', 'Brand']
    with open(source_file, 'rb') as fp:
        reader = csv.DictReader(fp, fieldnames=headers, dialect='excel',
                                skipinitialspace=True)
        for rowdict in reader:
            if None in rowdict:
                del rowdict[None]
            brand = rowdict.pop("Brand")
            product = rowdict.pop("Product")
            projects[brand][product] = rowdict
    return dict(projects)

source_file = 'merged.csv'
print build_dict(source_file)
produces
{'microsoft': {'green widget': {'Cost': '5', 'Price': '7'},
               'purple widget': {'Cost': '6', 'Price': '7'}},
 'sony': {'blue widget': {'Cost': '4', 'Price': '5'},
          'red widget': {'Cost': '5', 'Price': '6'}}}
from your input data (where merged.csv doesn't have the headers, only the data).
I used a defaultdict here, which is just like a dictionary except that when you refer to a key that doesn't exist, instead of raising an exception it simply creates a default value, in this case a dict. I then get -- and remove -- Brand and Product from each row, and store the remainder.
All that's left, I think, is to turn the cost and price into numbers instead of strings.
[modified to use DictReader directly rather than reader]
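If you want the numbers right away, one way (just a sketch) is to convert them as each row is stored, i.e. replace the last line of the loop with:
projects[brand][product] = {"Price": float(rowdict["Price"]),
                            "Cost": float(rowdict["Cost"])}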
Here I offer another way to satisfy your requirement (different from DSM's).
Firstly, this is my code:
import csv

new_dict = {}
with open('merged.csv', 'rb') as csv_file:
    data = csv.DictReader(csv_file, delimiter=",")
    for row in data:
        dict_brand = new_dict.get(row['Brand'], dict())
        dict_brand[row['Product']] = {k: row[k] for k in ('Cost', 'Price')}
        new_dict[row['Brand']] = dict_brand
print new_dict
Briefly speaking, the main point is to figure out what the key-value pairs in your required output are. What you want can be described as a 3-level dict: the key of the first level is the Brand value from the original data, so I extract it from the csv file with
dict_brand = new_dict.get(row['Brand'], dict())
which checks whether our new dict already has an entry for that Brand: if yes, it is reused; if not, an empty dict is created. The middle level is perhaps the trickiest part: the Product value becomes the key inside the Brand entry, and its value is the third-level dict holding the Price and Cost of the original row, which I build like this:
dict_brand[row['Product']] = {k: row[k] for k in ('Cost', 'Price')}
The last step is to set this 'middle dict' as the value in our new dict under the Brand key.
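As an aside, the get-or-create step above could equally be written with dict.setdefault; a sketch of the loop body:
for row in data:
    dict_brand = new_dict.setdefault(row['Brand'], {})
    dict_brand[row['Product']] = {k: row[k] for k in ('Cost', 'Price')}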
Finally, the output is
{'sony': {'blue widget': {'Price': '5', 'Cost': '4'},
          'red widget': {'Price': '6', 'Cost': '5'}},
 'microsoft': {'purple widget': {'Price': '7', 'Cost': '6'},
               'green widget': {'Price': '7', 'Cost': '5'}}}
That's that.
