How to use conditionals on Uniswap GraphQL API pool method? - python

I'm new to GraphQL and I've been figuring out the Uniswap API through the sandbox browser. I'm running a program which just gets metadata on the top 100 tokens and their related pools, but the pool query isn't working at all. I'm trying to express two conditions: if token0's hash is this and token1's hash is that, it should output the pool of those two. However, it only outputs pools matching the token0 hash and just ignores the second condition. I've tried using and, _and, two where's separated by {} or ,, and so on. This is an example I have (in Python):
class ExchangePools:
    def QueryPoolDB(self, hash1, hash2):
        query = """
        {
            pools(where: {token0: "%s"}, where: {token1: "%s"}, first: 1, orderBy: volumeUSD, orderDirection: desc) {
                id
                token0 {
                    id
                    symbol
                }
                token1 {
                    id
                    symbol
                }
                token1Price
            }
        }""" % (hash1, hash2)
        return query
or in the sandbox explorer this:
{
    pools(where: {token0: "0x2260fac5e5542a773aa44fbcfedf7c193bc2c599"} and: {token1: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"}, first: 1, orderBy: volumeUSD, orderDirection: desc) {
        id
        token0 {
            id
            symbol
        }
        token1 {
            id
            symbol
        }
        token1Price
    }
}
with this output:
{
    "data": {
        "pools": [
            {
                "id": "0x4585fe77225b41b697c938b018e2ac67ac5a20c0",
                "token0": {
                    "id": "0x2260fac5e5542a773aa44fbcfedf7c193bc2c599",
                    "symbol": "WBTC"
                },
                "token1": {
                    "id": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
                    "symbol": "WETH"
                },
                "token1Price": "14.8094450357546760737720184457113"
            }
        ]
    }
}
How can I get the API to register both statements?
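For what it's worth, in The Graph's GraphQL API multiple fields placed inside a single where object are combined with AND semantics, so a sketch along these lines (a hypothetical rewrite of the query above, not a confirmed fix) should filter on both tokens:
# A sketch, assuming fields inside one `where` object are ANDed together
def query_pool_db(hash1, hash2):
    return """
    {
        pools(where: {token0: "%s", token1: "%s"},
              first: 1, orderBy: volumeUSD, orderDirection: desc) {
            id
            token0 { id symbol }
            token1 { id symbol }
            token1Price
        }
    }""" % (hash1, hash2)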

Related

PyMongo: include only the fields whose names start with a prefix

For example, if this is my record
{
    "_id": "123",
    "name": "google",
    "ip_1": "10.0.0.1",
    "ip_2": "10.0.0.2",
    "ip_3": "10.0.1",
    "ip_4": "10.0.1",
    "description": ""
}
I want to get only those fields starting with 'ip_'. Suppose I have 500 fields and only 15 of them start with 'ip_'.
Can we do something like this to get the output -
db.collection.find({id:"123"}, {'ip*':1})
Output -
{
"ip_1":"10.0.0.1",
"ip_2":"10.0.0.2",
"ip_3":"10.0.1",
"ip_4":"10.0.1"
}
The following aggregate query, using PyMongo, returns documents with the field names starting with "ip_".
Note the various aggregation operators used: $filter, $regexMatch, $objectToArray, $arrayToObject. The aggregation pipeline has two stages: $project and $replaceWith.
pipeline = [
    {
        "$project": {
            "ipFields": {
                "$filter": {
                    "input": { "$objectToArray": "$$ROOT" },
                    "cond": { "$regexMatch": { "input": "$$this.k", "regex": "^ip" } }
                }
            }
        }
    },
    {
        "$replaceWith": { "$arrayToObject": "$ipFields" }
    }
]
pprint.pprint(list(collection.aggregate(pipeline)))
I am unaware of a way to specify an expression that would decide which hash keys would be projected. MongoDB has projection operators but they deal with arrays and text search.
If you have a fixed possible set of ip fields, you can simply request all of them regardless of which fields are present in a particular document, e.g. project with
{ip_1: true, ip_2: true, ...}
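As a short sketch of that fixed projection (assuming a collection handle and the four field names from the example):
# A minimal sketch, assuming `collection` and a fixed set of ip_ fields
doc = collection.find_one(
    {"_id": "123"},
    {"_id": False, "ip_1": True, "ip_2": True, "ip_3": True, "ip_4": True},
)
print(doc)  # fields absent from a document are simply omitted from the result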

Can't Access Data in List of Dictionaries

I'm trying to access some data within nested ordered dictionaries. This dictionary was created by using the XMLTODICT module. Obviously I would like to create my own dictionaries but this one is out of my control.
I've tried to access them numerous ways.
Example:
Using a for loop:
I can access the first level using v["name"], which gives me Child_Policy and Parent_Policy.
When I do v["class"]["name"] I would expect to get "Test1", but that's not the case.
I've also tried v[("class", )] variations as well, with no luck.
Any input would be much appreciated
The data below is retrieved from a device via XML and converted to dictionary with XMLTODICT.
[
    {
        "#xmlns": "http://cisco.com/ns/yang/Cisco-IOS-XE-policy",
        "name": "Child_Policy",
        "class": [
            {
                "name": "Test1",
                "action-list": {
                    "action-type": "bandwidth",
                    "bandwidth": {
                        "percent": "30"
                    }
                }
            },
            {
                "name": "Test2",
                "action-list": {
                    "action-type": "bandwidth",
                    "bandwidth": {
                        "percent": "30"
                    }
                }
            }
        ]
    },
    {
        "#xmlns": "http://cisco.com/ns/yang/Cisco-IOS-XE-policy",
        "name": "Parent_Policy",
        "class": {
            "name": "class-default",
            "action-list": [
                {
                    "action-type": "shape",
                    "shape": {
                        "average": {
                            "bit-rate": "10000000"
                        }
                    }
                },
                {
                    "action-type": "service-policy",
                    "service-policy": "Child_Policy"
                }
            ]
        }
    }
]
My expected result is to retrieve values from the nested dictionary and produce an output similar to this:
Queue_1: Test1
Action_1: bandwidth
Allocation_1: 40
Queue_2: Test2
Action_2: bandwidth
Allocation_2: 10
I have no issue formatting the output; just getting the values is the issue.
EDIT: I had some time tonight, so I changed the code to be dynamic:
class_idx = 0   # renamed from `int` to avoid shadowing the builtin
queue_num = 0   # renamed from `int_2`
for v in policy_dict.values():
    print("\n")
    print("{:15} {:<35}".format("Policy: ", v[0]["name"]))
    print("_______")
    for i in v:
        queue_num = queue_num + 1
        try:
            print("\n")
            print("{:15} {:<35}".format("Queue_%s: " % queue_num, v[0]["class"][class_idx]["name"]))
            print("{:15} {:<35}".format("Action_%s: " % queue_num, v[0]["class"][class_idx]["action-list"]["action-type"]))
            print("{:15} {:<35}".format("Allocation_%s: " % queue_num, v[0]["class"][class_idx]["action-list"]["bandwidth"]["percent"]))
            class_idx = class_idx + 1
        except KeyError:
            break
    pass
According to the sample you posted, you can try to retrieve values like:
v[0]["class"][0]["name"]
This outputs:
Test1
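Building on that, one way to walk the whole structure (a sketch, assuming policies holds the parsed list shown above; note that xmltodict yields a dict instead of a list when an element occurs only once):
# A minimal sketch, assuming `policies` is the parsed list from the question
def as_list(value):
    # xmltodict returns a dict when an element appears once, a list otherwise
    return value if isinstance(value, list) else [value]

for policy in policies:
    print("Policy:", policy["name"])
    for n, cls in enumerate(as_list(policy["class"]), start=1):
        for action in as_list(cls["action-list"]):
            if action["action-type"] == "bandwidth":
                print("Queue_%s: %s" % (n, cls["name"]))
                print("Action_%s: %s" % (n, action["action-type"]))
                print("Allocation_%s: %s" % (n, action["bandwidth"]["percent"]))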

How to generate a word cloud using elasticsearch?

I have an elasticsearch DB with data of the form
record = {  # all but age are strings
    'diagnosis': self.diagnosis,
    'vignette': self.vignette,
    'symptoms': self.symptoms_list,
    'care': self.care_level_string,
    'age': self.age,  # float
    'gender': self.gender
}
I want to create a word cloud of the data in vignette.
I tried all sorts of queries and I get error 400, meaning I don't understand how to query the database.
I am using Python. This is the only successful query I was able to come up with:
def search_phrase_in_vignettes(self, phrase):
    body = {
        "_source": ["vignette"],
        "query": {
            "match_phrase": {
                "vignette": {
                    "query": phrase,
                }
            }
        }
    }
    res = self.es.search(index=self.index_name, doc_type=self.doc_type, body=body)
This finds any record with the phrase contained in the field 'vignette'.
I am thinking some aggregation should do the trick, but I can't seem to write a correct query with 'aggs'.
Would love some help on how to correctly write even the simplest aggregation query in Python.
Use a terms aggregation for the approximate word counts. Your query will be:
{
    "query": {
        "match_phrase": {
            "vignette": {
                "query": phrase,
            }
        }
    },
    "aggs": {
        "cloud": {
            "terms": { "field": "vignette" }
        }
    }
}
When you receive the results, take the buckets from the aggregations key:
res = self.es.search(index=self.index_name, doc_type=self.doc_type, body=body)
for bucket in res['aggregations']['cloud']['buckets']:
    # bucket['key'] is the term, bucket['doc_count'] its frequency;
    # the rest of building the cloud goes here
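From those buckets you can then render the actual image; here is a sketch assuming the third-party wordcloud package (not part of the answer above):
# A minimal sketch, assuming the third-party `wordcloud` package is installed
from wordcloud import WordCloud

freqs = {b['key']: b['doc_count'] for b in res['aggregations']['cloud']['buckets']}
WordCloud(width=800, height=400).generate_from_frequencies(freqs).to_file('cloud.png')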

Add a Protected Range to an existing NamedRange

I have an existing worksheet with an existing NamedRange for it and I would like to call the batch_update method of the API to protect that range from being edited by anyone other than the user that makes the batch_update call.
I have seen an example on how to add protected ranges via a new range definition, but not from an existing NamedRange.
I know I need to send the addProtectedRange request. Can I define the request body with a Sheetname!NamedRange notation?
this_range = worksheet_name + "!" + nrange
batch_update_spreadsheet_request_body = {
    'requests': [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "range": {
                        "name": this_range,
                    },
                    "description": "Protecting xyz",
                    "warningOnly": False
                }
            }
        }
    ],
}
EDIT: Given @Tanaike's feedback, I adapted the call to something like:
body = {
    "requests": [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "namedRangeId": namedRangeId,
                    "description": "Protecting via gsheets_manager",
                    "warningOnly": False,
                    "requestingUserCanEdit": False
                }
            }
        }
    ]
}
res2 = service.spreadsheets().batchUpdate(spreadsheetId=ssId, body=body).execute()
print(res2)
But although it lists the new protections, it still lists 5 different users (all of them) as editors. If I try to manually edit the protection added by my gsheets_manager script, it complains that the range is invalid.
Interestingly, it seems to ignore the requestingUserCanEdit flag, according to the returned message:
{u'spreadsheetId': u'NNNNNNNNNNNNNNNNNNNNNNNNNNNN',
 u'replies': [{u'addProtectedRange': {u'protectedRange': {
     u'requestingUserCanEdit': True,
     u'description': u'Protecting via gsheets_manager',
     u'namedRangeId': u'1793914032',
     u'editors': {},
     u'protectedRangeId': 2012740267,
     u'range': {u'endColumnIndex': 1, u'sheetId': 1196959832, u'startColumnIndex': 0}}}}]}
Any ideas?
How about using namedRangeId for your situation? The flow of the sample script is as follows.
1. Retrieve the namedRangeId using spreadsheets().get of the Sheets API.
2. Set a protected range for that namedRangeId using spreadsheets().batchUpdate of the Sheets API.
Sample script:
nrange = "### name ###"
ssId = "### spreadsheetId ###"
res1 = service.spreadsheets().get(spreadsheetId=ssId, fields="namedRanges").execute()
namedRangeId = ""
for e in res1['namedRanges']:
if e['name'] == nrange:
namedRangeId = e['namedRangeId']
break
body = {
"requests": [
{
"addProtectedRange": {
"protectedRange": {
"namedRangeId": namedRangeId,
"description": "Protecting xyz",
"warningOnly": False
}
}
}
]
}
res2 = service.spreadsheets().batchUpdate(spreadsheetId=ssId, body=body).execute()
print(res2)
Note:
This script supposes that Sheets API can be used for your environment.
This is a simple sample script. So please modify it to your situation.
References:
ProtectedRange
Named and Protected Ranges
If this was not what you want, I'm sorry.
Edit:
In my above answer, I modified your script using your settings. If you want to protect the named range, please modify body as follows.
Modified body
body = {
    "requests": [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "namedRangeId": namedRangeId,
                    "description": "Protecting xyz",
                    "warningOnly": False,
                    "editors": {"users": ["### your email address ###"]},  # Added
                }
            }
        }
    ]
}
With this, the named range can be modified only by you. I use these settings myself and have confirmed that they work in my environment. But if this didn't work in your situation, I'm sorry.

Why are my index fields still shown as "analyzed" even after I index them as "not_analyzed"?

I have a lot of data (JSON format) in Amazon SQS. I have a simple Python script which pulls data from the SQS queue and then indexes it in ES. My problem is that even though I have specified "not_analyzed" in my script, I still see my index fields as "analyzed" in the index settings of the Kibana 4 dashboard.
Here is my Python code:
doc = {
    "settings": {
        "number_of_shards": 1
    },
    "mappings": {
        "type_name": {
            "dynamic_templates": [
                {
                    "strings": {
                        "match_mapping_type": "string",
                        "mapping": {
                            "type": "string",
                            "index": "not_analyzed"
                        }
                    }
                }
            ]
        }
    }
}
es = Elasticsearch()
h = { "Content-type": "application/json" }
res = requests.request("POST", "http://localhost:9200/" + index_name + "/", headers=h, data=json.dumps(doc))
post = es.index(index=index_name, doc_type='server', id=1, body=json.dumps(new_list))
print "------------------------------"
print "Data Pushed Successfully to ES"
I am not sure what's wrong here?
The doc_type you're using when indexing (= server) doesn't match the one you have in your index mappings (= type_name).
So if you index your documents like this instead, it will work:
post = es.index(index=index_name, doc_type='type_name', id=1, body=json.dumps(new_list))
                                            ^
                                            |
                                       change this
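As a quick sanity check (a sketch, not part of the original answer), you can fetch the mapping back and confirm the string fields come out as "not_analyzed":
# A minimal sketch, assuming the `es` client and `index_name` from the question
import json
mapping = es.indices.get_mapping(index=index_name)
print(json.dumps(mapping, indent=2))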
