I'm getting data from the Google Analytics 4 API (https://developers.google.com/analytics/devguides/reporting/data/v1/api-schema?hl=en). My numbers in GA4 and UA differ a lot, so I'm wondering whether that is related to sampling, and how I can query without sampling.
Currently I query like this, just fetching the next 100,000 rows until no more come back:
offset = 0

while True:
    print("offset: " + str(offset))
    request = {
        "requests": [
            {
                "dateRanges": [
                    {
                        "startDate": "180daysAgo",
                        "endDate": "today"
                    }
                ],
                "dimensions": [{'name': name} for name in dimensions],
                "metrics": [{'name': name} for name in metrics],
                "offset": offset,
                "limit": 100000
            }
        ]
    }
    # Make request
    response = analytics_GA4.properties().batchRunReports(property=property_id, body=request).execute()
    # Stop the loop once a page comes back without rows
    if response.get("reports")[0].get("rows") is None:
        break
    else:
        offset = offset + 100000
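For pagination it may be more robust to drive the loop off the report's rowCount (the total number of rows matching the query) rather than probing until rows disappears. A minimal sketch, reusing the same analytics_GA4 client, property_id, dimensions and metrics as above, and printing the report metadata once in case the API reports any sampling or data-quality details there:

# Sketch: paginate with rowCount instead of probing for a missing "rows" key.
# Reuses the analytics_GA4 client, property_id, dimensions and metrics from above.
offset = 0
page_size = 100000
all_rows = []

while True:
    request = {
        "requests": [
            {
                "dateRanges": [{"startDate": "180daysAgo", "endDate": "today"}],
                "dimensions": [{"name": name} for name in dimensions],
                "metrics": [{"name": name} for name in metrics],
                "offset": offset,
                "limit": page_size
            }
        ]
    }
    response = analytics_GA4.properties().batchRunReports(property=property_id, body=request).execute()
    report = response["reports"][0]
    all_rows.extend(report.get("rows", []))

    if offset == 0:
        # The report metadata is where sampling/data-quality hints would show up, if any.
        print("report metadata:", report.get("metadata", {}))

    # rowCount is the total number of rows for the query, independent of paging.
    if offset + page_size >= report.get("rowCount", 0):
        break
    offset += page_size

print("total rows fetched: " + str(len(all_rows)))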
I'm new to GraphQL and I've been figuring out the Uniswap API through the sandbox browser. I'm running a program that just gets metadata on the top 100 tokens and their related pools, but the pool query isn't working at all. I'm trying to express two conditions: if token0's hash is this and token1's hash is that, it should output the pool of those two tokens. However, it only outputs pools matching the token0 hash and just ignores the second condition. I've tried using and, _and, and two where clauses separated by {} or a comma, and so on. This is an example I have (Python, by the way):
class ExchangePools:
    def QueryPoolDB(self, hash1, hash2):
        query = """
        {
          pools(where: {token0: "%s"}, where: {token1:"%s"}, first: 1, orderBy:volumeUSD, orderDirection:desc) {
            id
            token0 {
              id
              symbol
            }
            token1 {
              id
              symbol
            }
            token1Price
          }
        }""" % (hash1, hash2)
        return query
Or, in the sandbox explorer, this:
{
  pools(where: {token0: "0x2260fac5e5542a773aa44fbcfedf7c193bc2c599"} and: {token1:"0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"}, first: 1, orderBy:volumeUSD, orderDirection:desc) {
    id
    token0 {
      id
      symbol
    }
    token1 {
      id
      symbol
    }
    token1Price
  }
}
with this output:
{
  "data": {
    "pools": [
      {
        "id": "0x4585fe77225b41b697c938b018e2ac67ac5a20c0",
        "token0": {
          "id": "0x2260fac5e5542a773aa44fbcfedf7c193bc2c599",
          "symbol": "WBTC"
        },
        "token1": {
          "id": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
          "symbol": "WETH"
        },
        "token1Price": "14.8094450357546760737720184457113"
      }
    ]
  }
}
How can I get the API to register both statements?
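For what it's worth, subgraphs on The Graph normally treat several fields inside a single where object as an implicit AND, so putting both token filters in one object (rather than two where arguments) may be what's missing. A sketch in the same style as QueryPoolDB above:

# Sketch: both token filters inside ONE where object (implicit AND in The Graph's
# filter syntax). Field names match the snippets above; the hashes are whatever
# pair is being queried.
def build_pool_query(hash1, hash2):
    return """
    {
      pools(where: {token0: "%s", token1: "%s"}, first: 1, orderBy: volumeUSD, orderDirection: desc) {
        id
        token0 {
          id
          symbol
        }
        token1 {
          id
          symbol
        }
        token1Price
      }
    }""" % (hash1, hash2)

If no pool comes back, it can also be worth swapping the two addresses, since Uniswap stores each pair with token0 as the lower address.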
I am trying to create a way to actively monitor sales/listings. I have created a POST API call that hits the website, and it is currently pulling down the JSON data that has the item that sold, the price, the time, the seller, and the buyer.
Here's an example of the data returned in the JSON.
Item: 1
Price: 50$
Sold Date: 10/28/2021 10:00AM
Seller: John
Buyer: Frank
Say this script runs every 5 minutes and prints out the last sale. At 10:05, it sees Item 1 sold, so it prints out the data. At 10:10, no new items have sold, so it prints out Item 1 again. I am looking for a way to only print when Item 2 sells (i.e. when the JSON data has been updated), but I am having trouble figuring out the best way to handle this logic in Python.
Would you just use the date/time minus the last 5 minutes? Or is there a better way?
The code is simple:
asset_url = 'www.sample.com/api/'
seller_payload = json.dumps({
    "name": "find",
    "arguments": [
        {
            "database": "prod",
            "data": "SALES",
            "query": {
                "seller": {
                    "$in": [
                        "sellerid"
                    ]
                },
            },
            "sort": {
                "epoch": {
                    "$numberInt": "-1"
                }
            },
            "limit": {
                "$numberInt": "1"
            }
        }
    ],
    "service": "db"
})
seller_response = requests.request("POST", asset_url, headers=profile_headers, data=seller_payload)

# Parse the JSON body of the response before indexing into it
asset_id = seller_response.json()[0]['asset']
seller = seller_response.json()[0]['seller']
price = seller_response.json()[0]['price']
print(asset_id)
print(seller)
print(price)
If the script runs every 5 minutes, store each transaction in a file. Something like this:
from pickle import dump, load

transactions = []
try:
    # Try to read previously stored transactions from file.
    with open("transactions.txt", "rb") as f:
        transactions = load(f)
except (FileNotFoundError, EOFError):
    print("No transaction history yet.")

latest = seller_response.json()[0]

# Check if the last stored transaction is the same as the one fetched just now.
if not transactions or any(
        transactions[-1][key] != latest[key]
        for key in ("asset", "seller", "soldDate", "buyer", "item")):
    print(latest)
    transactions.append(latest)
    with open("transactions.txt", "wb") as f:
        dump(transactions, f)
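If keeping the whole history around feels heavy, another option is to remember only the newest sale's timestamp between runs. A minimal sketch, where the "epoch" / "soldDate" field names are assumptions based on the example data above:

import os

LAST_SEEN_FILE = "last_seen_sale.txt"

def is_new_sale(sale):
    # "epoch" / "soldDate" are assumed field names taken from the example above.
    current = str(sale.get("epoch") or sale.get("soldDate"))
    last_seen = None
    if os.path.exists(LAST_SEEN_FILE):
        with open(LAST_SEEN_FILE) as f:
            last_seen = f.read().strip()
    if current == last_seen:
        return False  # same sale as the previous run, nothing new to print
    with open(LAST_SEEN_FILE, "w") as f:
        f.write(current)
    return True

if is_new_sale(seller_response.json()[0]):
    print(seller_response.json()[0])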
I am struggling to capture the results from the IBM Watson entity analysis in a dictionary. I would like to extract the sentiment of each link through a function. I have a function that processes a single URL, but the dictionary I am using to store the results only captures the last URL's results. I am new to Python and appreciate any help.
Here is my entity analysis code:
import json
import requests

# function to process a URL
def processurl(url_to_analyze):
    # end point
    endpoint = f"{URL}/v1/analyze"
    # credentials
    username = "apikey"
    password = API_KEY
    # parameters
    parameters = {
        "version": "2020-08-01"
    }
    # headers
    headers = {
        "Content-Type": "application/json"
    }
    # watson options
    watson_options = {
        "url": url_to_analyze,
        "features": {
            "entities": {
                "sentiment": True,
                "emotion": True,
                "limit": 10
            }
        }
    }
    # send the request and return the parsed JSON response
    response = requests.post(endpoint,
                             data=json.dumps(watson_options),
                             headers=headers,
                             params=parameters,
                             auth=(username, password))
    return response.json()
Here is the function I created to process the result from above:
# create a function to extract the entities from the result data
def getentitylist(data, threshold):
    result = []
    for entity in data["entities"]:
        relevance = float(entity["relevance"])
        if relevance > threshold:
            result.append(entity["text"])
    return result
After looping through the URLs, I can't seem to store the results in a dictionary so that I can pass them to my function for the entity results.
# method II: loop through news api urls, perform entity analysis and store it in a dictionary
entitydict = {}
for url in url_to_analyze:
    entitydict.update(processurl(url))
I can't see where you are calling getentitylist, but look at your URL loop:
entitydict = {}
for url in url_to_analyze:
    entitydict.update(processurl(url))
update updates the dictionary by key, i.e. it will overwrite the values for any keys already in the dictionary. Your response will look something like this:
{
  "usage": {
    "text_units": 1,
    "text_characters": 2708,
    "features": 1
  },
  "retrieved_url": "http://www.cnn.com/",
  "language": "en",
  "entities": [
    {
      "type": "Company",
      "text": "CNN",
      "sentiment": {
        "score": 0.0,
        "label": "neutral"
      },
      "relevance": 0.784947,
      "disambiguation": {
        "subtype": [
          "Broadcast",
          "AwardWinner",
          "RadioNetwork",
          "TVNetwork"
        ],
        "name": "CNN",
        "dbpedia_resource": "http://dbpedia.org/resource/CNN"
      },
      "count": 9
    }
  ]
}
The keys that get updated are at the top level, i.e. usage, retrieved_url, language, entities. So entitydict will only contain the response for the last URL, as the previous values for these keys get overwritten.
What you should do instead is use the URL as the key for each response:
entitydict = {}
for url in url_to_analyze:
    entitydict.update({url: processurl(url)})
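With the responses keyed by URL, the entity extraction from getentitylist above can then be applied per URL; the 0.5 threshold below is just an illustrative value:

# Sketch: run getentitylist over each stored response; 0.5 is an arbitrary example threshold.
entities_by_url = {}
for url, response_data in entitydict.items():
    entities_by_url[url] = getentitylist(response_data, 0.5)
print(entities_by_url)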
This is the portion of my code that I'm having issues with.
table_data_insert_all_request_body = {
    "kind": "bigquery#tableDataInsertAllRequest",
    "skipInvalidRows": True,
    "ignoreUnknownValues": True,
    "templateSuffix": 'suffix',
    "rows": [
        {
            "json": {
                ("one"): ("two"),
                ("three"): ("four")
            }
        }
    ]
}
request = service.tabledata().insertAll(projectId=projectId, datasetId=datasetId, tableId=tableId, body=table_data_insert_all_request_body)
response = request.execute()
If I print the response, I get:
{u'kind': u'bigquery#tableDataInsertAllResponse'}
I can access the project, dataset and even the table, but I can't update the values in the table. What do I need to do differently? Obviously I don't want to insert those two placeholder values, but I can't get anything to upload. Once I can get something to upload, I'll be able to get rows working.
Even though it's tough to tell without looking at your schema, I am pretty sure your JSON data is not correct.
Here is what I use.
Bodyfields = {
    "kind": "bigquery#tableDataInsertAllRequest",
    "rows": [
        {
            "json": {
                'col_name_1': 'row 1 value 1',
                'col_name_2': 'row 1 value 2'
            }
        },
        {
            "json": {
                'col_name_1': 'row 2 value 1',
                'col_name_2': 'row 2 value 2'
            }
        }
    ]
}
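If it helps, the body above goes through the same insertAll call as in the question (assuming the same service, projectId, datasetId and tableId objects); per-row failures, if any, come back under insertErrors in the response:

# Sketch: submit the body with the same client objects as in the question.
request = service.tabledata().insertAll(projectId=projectId, datasetId=datasetId, tableId=tableId, body=Bodyfields)
response = request.execute()

# Rows that fail validation are reported per row under "insertErrors";
# a response without that key means the rows were accepted.
print(response.get("insertErrors", "no insert errors"))

Note that the column names ('col_name_1', 'col_name_2') have to match the table's schema exactly.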
I have a lot of data (JSON format) in Amazon SQS. I have a simple Python script which pulls data from the SQS queue and then indexes it in ES. My problem is that even though I have specified "not_analyzed" in my script, I still see the index field as "analyzed" in the index settings of the Kibana 4 dashboard.
Here is my Python code:
doc = {
    "settings": {
        "number_of_shards": 1
    },
    "mappings": {
        "type_name": {
            "dynamic_templates": [
                {
                    "strings": {
                        "match_mapping_type": "string",
                        "mapping": {
                            "type": "string",
                            "index": "not_analyzed"
                        }
                    }
                }
            ]
        }
    }
}

es = Elasticsearch()
h = {"Content-type": "application/json"}
res = requests.request("POST", "http://localhost:9200/" + index_name + "/", headers=h, data=json.dumps(doc))
post = es.index(index=index_name, doc_type='server', id=1, body=json.dumps(new_list))
print("------------------------------")
print("Data Pushed Successfully to ES")
I am not sure what's wrong here.
The doc_type you're using when indexing (= server) doesn't match the one you have in your index mappings (= type_name).
So if you index your documents like this instead, it will work:
post = es.index(index=index_name, doc_type='type_name', id=1, body=json.dumps(new_list))
                                            ^
                                            |
                                       change this
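As a quick sanity check, the mapping can be fetched back after indexing to confirm that the dynamic template actually marked the string fields as not_analyzed (same index_name as above):

# Sketch: fetch the mapping back to verify that the dynamic template was applied.
import json
import requests

mapping = requests.get("http://localhost:9200/" + index_name + "/_mapping").json()
print(json.dumps(mapping, indent=2))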