I'm trying to retrieve the field 'biddingStrategyConfiguration' via the AdWords API for Python (3) using CampaignService(), but I always get a weird error. It's weird because the field does exist, as mentioned in the documentation found here.
account_id = 'any_id'
adwords = Adwords(account_id)  # classes and objects already created, etc.

def get_bidding_strategy():
    service = adwords.client.GetService('CampaignService', version='v201806')
    selector = {
        'fields': ['Id', 'Name', 'Status', 'biddingStrategyConfiguration']
    }
    results = service.get(selector)
    data = []
    if 'entries' in results:
        for item in results['entries']:
            if item['status'] == 'ENABLED':
                data.append({
                    'id': item['id'],
                    'name': item['name'],
                    'status': item['status']  # I have to retrieve biddingStrategyConfiguration.biddingStrategyName (next line)
                })
    return results
This is the error:
Error summary:
{'faultMessage': "[SelectorError.INVALID_FIELD_NAME # serviceSelector; trigger:'biddingStrategyConfiguration']",
'requestId': '000581286e61247e0a376ac776062df4',
'serviceName': 'CampaignService',
'methodName': 'get',
'operations': '1',
'responseTime': '315'}
Notice that fields like "id" or "name" are easily retrievable, but the bidding configuration is not. In fact, what I'm after is the id/name of the bidding strategies, via .biddingStrategyId or .biddingStrategyName.
Can anyone help me? Thanks in advance.
How I solved it: biddingStrategyConfiguration is not a retrievable field, but biddingStrategyName is (part of the JSON).
account_id = 'any_id'
adwords = Adwords(account_id)  # classes and objects already created, etc.

def get_bidding_strategy():
    service = adwords.client.GetService('CampaignService', version='v201806')
    selector = {
        'fields': ['Id', 'Name', 'Status', 'biddingStrategyName']
    }
    results = service.get(selector)
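For completeness, here is one way the rest of get_bidding_strategy() might read the strategy name out of each entry. This is a sketch, not confirmed against the API: the assumption that the value comes back nested under biddingStrategyConfiguration (even though the selector field is the flat 'biddingStrategyName'), and the dict-style access, are inferred from how the question's code reads entries, so verify against your client's actual output.

    data = []
    if 'entries' in results:
        for item in results['entries']:
            if item['status'] == 'ENABLED':
                # Assumption: the response nests the selected value under
                # biddingStrategyConfiguration, as in the API's Campaign type.
                config = item['biddingStrategyConfiguration']
                data.append({
                    'id': item['id'],
                    'name': item['name'],
                    'status': item['status'],
                    'bidding_strategy_name': config['biddingStrategyName'],
                })
    return data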
Related
I am trying to retrieve Twitter data using Tweepy with the code below, but I'm having difficulties collecting media_fields data. In particular, I want to get the type of each media attachment, but I haven't managed to.
As you can see in the screenshot below, the media value is copied into cells that should be empty.
(screenshot: https://i.stack.imgur.com/AxCcl.png)
import tweepy
from twitter_authentication import bearer_token
import time
import pandas as pd

client = tweepy.Client(bearer_token, wait_on_rate_limit=True)

hoax_tweets = []
for response in tweepy.Paginator(client.search_all_tweets,
                                 query='Covid hoax -is:retweet lang:en',
                                 user_fields=['username', 'public_metrics', 'description',
                                              'location', 'verified', 'entities'],
                                 tweet_fields=['id', 'in_reply_to_user_id', 'referenced_tweets',
                                               'context_annotations', 'source', 'created_at',
                                               'entities', 'geo', 'withheld', 'public_metrics',
                                               'text'],
                                 media_fields=['media_key', 'type', 'url', 'alt_text',
                                               'public_metrics', 'preview_image_url'],
                                 expansions=['author_id', 'in_reply_to_user_id', 'geo.place_id',
                                             'attachments.media_keys', 'referenced_tweets.id',
                                             'referenced_tweets.id.author_id'],
                                 place_fields=['id', 'name', 'country_code', 'place_type',
                                               'full_name', 'country', 'geo', 'contained_within'],
                                 start_time='2021-01-20T00:00:00Z',
                                 end_time='2021-01-21T00:00:00Z',
                                 max_results=100):
    time.sleep(1)
    hoax_tweets.append(response)
result = []
user_dict = {}
media_dict = {}

# Loop through each response object
for response in hoax_tweets:
    # Take all of the users, and put them into a dictionary of dictionaries with the info we want to keep
    for user in response.includes['users']:
        user_dict[user.id] = {'username': user.username,
                              'followers': user.public_metrics['followers_count'],
                              'tweets': user.public_metrics['tweet_count'],
                              'description': user.description,
                              'location': user.location,
                              'verified': user.verified
                              }
    for media in response.includes['media']:
        media_dict[tweet.id] = {'media_key': media.media_key,
                                'type': media.type
                                }
    for tweet in response.data:
        # For each tweet, find the author's information
        author_info = user_dict[tweet.author_id]
        # Put all of the information we want to keep in a single dictionary for each tweet
        result.append({'author_id': tweet.author_id,
                       'username': author_info['username'],
                       'author_followers': author_info['followers'],
                       'author_tweets': author_info['tweets'],
                       'author_description': author_info['description'],
                       'author_location': author_info['location'],
                       'author_verified': author_info['verified'],
                       'tweet_id': tweet.id,
                       'text': tweet.text,
                       'created_at': tweet.created_at,
                       'retweets': tweet.public_metrics['retweet_count'],
                       'replies': tweet.public_metrics['reply_count'],
                       'likes': tweet.public_metrics['like_count'],
                       'quote_count': tweet.public_metrics['quote_count'],
                       'in_reply_to_user_id': tweet.in_reply_to_user_id,
                       'media': tweet.attachments,
                       'media_type': media,
                       'conversation': tweet.referenced_tweets
                       })

# Change this list of dictionaries into a dataframe
df = pd.DataFrame(result)
Also, when I change 'media': tweet.attachments to 'media': tweet.attachments[0] to get the 'media_key' data, I get the following error message: "TypeError: 'NoneType' object is not subscriptable"
What am I doing wrong? Any suggestions would be appreciated.
The subscriptable error comes from the fact that tweet.attachments is None, hence the 'NoneType' part of the message. To make it work, you can add a check for None:
'media':tweet.attachments[0] if tweet.attachments else None
I have never used the Twitter API myself, but one thing to check is whether tweet attachments are always present or may be absent.
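Beyond the None check, the media loop in the question has two other problems worth flagging: it iterates response.includes['media'], which raises a KeyError for pages without any media, and it keys media_dict by tweet.id before any tweet variable exists. A sketch of a safer join, assuming the v2 response shape (includes['media'] objects carry a media_key, and tweet.attachments is a dict like {'media_keys': [...]}):

media_dict = {}
for response in hoax_tweets:
    # Key media objects by their own media_key, not by a tweet id.
    for media in response.includes.get('media', []):
        media_dict[media.media_key] = {'media_key': media.media_key,
                                       'type': media.type}

    for tweet in response.data:
        attachments = tweet.attachments or {}   # attachments may be None
        keys = attachments.get('media_keys', [])
        # Empty list for tweets without media, so no stale value leaks
        # into rows that should be empty.
        media_types = [media_dict[k]['type'] for k in keys if k in media_dict]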
I am getting the tweets and the corresponding user id in an object obj, but I don't get the other information, like conversation_id. I want to use it to get the replies and the quotes. That's the solution I found on the internet, but I didn't know how to make it work.
Does anyone know how to extract the conversation_id or any other parameter, like geo.place_id? I am using Tweepy, but if anyone has another solution using a different library that gets the same result, that would also be helpful. Thanks for your help!
You can try the code if you create another file, config, and define your tokens there. I can't share mine for security reasons.
import tweepy
import config

users_name = ['derspiegel', 'zeitonline']
tweet_tab = []

def getClient():
    client = tweepy.Client(bearer_token=config.BEARER_TOKEN,
                           consumer_key=config.API_KEY,
                           consumer_secret=config.API_KEY_SECRET,
                           access_token=config.ACCESS_TOKEN,
                           access_token_secret=config.ACCESS_TOKEN_SECRET)
    return client  # without this return, getClient() hands back None

def searchTweets(client):
    for i in users_name:
        client = getClient()
        user = client.get_user(username=i)
        userId = user.data.id
        tweets = client.get_users_tweets(userId,
                                         expansions=[
                                             'author_id', 'referenced_tweets.id',
                                             'referenced_tweets.id.author_id',
                                             'in_reply_to_user_id', 'attachments.media_keys',
                                             'entities.mentions.username', 'geo.place_id'],
                                         tweet_fields=[
                                             'id', 'text', 'author_id', 'created_at',
                                             'conversation_id', 'entities',
                                             'public_metrics', 'referenced_tweets'
                                         ],
                                         user_fields=[
                                             'id', 'name', 'username', 'created_at',
                                             'description', 'public_metrics', 'verified'
                                         ],
                                         place_fields=['full_name', 'id'],
                                         media_fields=['type', 'url', 'alt_text',
                                                       'public_metrics'])
        if tweets is not None and len(tweets) > 0:
            obj = {}
            obj['id'] = userId
            obj['text'] = tweets
            tweet_tab.append(obj)
    return tweet_tab

searchTweets(getClient())  # the client argument is rebuilt inside the loop anyway
print("tableau final", tweet_tab)
My guess is that you need to put the ids into a list through which the function can iterate. Create the id list and try:
def get_tweets_from_timelines():
    tweets_timelines_list = []
    for one_id in ids:  # ids: the list of user ids to walk through
        for tweet in tweepy.Paginator(client.get_users_tweets, id=one_id, max_results=100,
                                      tweet_fields=['attachments', 'author_id',
                                                    'context_annotations', 'created_at',
                                                    'entities', 'conversation_id',
                                                    'possibly_sensitive', 'public_metrics',
                                                    'referenced_tweets', 'reply_settings',
                                                    'source', 'withheld'],
                                      user_fields=['created_at', 'description', 'entities',
                                                   'profile_image_url', 'protected',
                                                   'public_metrics', 'url', 'verified',
                                                   'withheld'],
                                      expansions=['referenced_tweets.id', 'in_reply_to_user_id',
                                                  'attachments.media_keys'],
                                      media_fields=['preview_image_url'],
                                      ):
            tweets_timelines_list.append(tweet)
    return tweets_timelines_list
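Note that Paginator here yields whole Response pages, not individual tweets, despite the loop variable's name. Once the list is built, conversation_id should be available on each Tweet object because it was requested in tweet_fields. A minimal sketch of reading it back, under those assumptions:

# Each element of tweets_timelines_list is a Response page.
for page in get_tweets_from_timelines():
    for tweet in page.data or []:   # page.data is None for empty pages
        print(tweet.id, tweet.conversation_id)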
I created a helper function to update customer data in a class. When I call the function, it updates all the information. But if I try it again in my tests, it creates a new entry in the customer database, producing a duplicate.
I can't figure out if it's my logic or what I am doing wrong. Can someone assist?
I am using the HTTP requests package.
Initiating the class
self.customer = Sq_Customer(
    first_name='Testy',
    last_name='McTesty',
    email='McTesty@testy.com',
    phone='123-456-7890'
)
Update user JSON and Function call
self.data = {
    'given_name': 'Dummy',
    'email_address': 'dummy_account@testing.com',
    'address': {
        'address_line_1': '1234 Main Street',
        'address_line_2': '',
        'locality': 'New York',
        'administrative_district_level_1': 'NY',
        'postal_code': '11413',
        'country': 'US'
    }
}
self.update_customer = self.customer.update_customer_acct(customer_id, self.data)  # customer_id: the ID of the customer being updated
My Helper Function
def update_customer_acct(self, user_id, data):
    '''
    Update Customer information.
    '''
    self.customer = self.get_customer(user_id)
    if self.customer['customer']['id'] == user_id:
        self.update_customer_data = self.connect.put('/v2/customers/' + user_id, data)
        return self.__sqware_json_decoder(self.update_customer_data)
    else:
        return 'There is no account associated with that ID.'
How I solved it: I removed the email and there was no longer a duplicate. It's an issue with me creating a customer during the test: when the email is changed, a new account gets created. Thanks again.
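If the root cause is the test creating a fresh customer on each run, one option is to look the customer up by email first and only create when nothing is found. A sketch using Square's SearchCustomers endpoint (POST /v2/customers/search with an exact email filter); it assumes your self.connect wrapper exposes a post() analogous to the put() used above, and that this method lives in the same class:

def get_customer_by_email(self, email):
    '''
    Look up an existing customer by exact email so repeated test runs
    reuse the same account instead of creating duplicates.
    '''
    body = {'query': {'filter': {'email_address': {'exact': email}}}}
    response = self.connect.post('/v2/customers/search', body)  # assumed wrapper method
    return self.__sqware_json_decoder(response)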
Does anyone know how to reference a field value in a model when it's defined with the @property decorator?
I have an 'order' model with a property which totals a number of values in fields related to my 'order' class via a foreign key.
@property
def total_price(self):
    """
    The total value of all items in the order, minus any discounts.
    """
    total = sum([item.total_price
                 for item in self.order_orderitem_set.all()])
    discounts = sum([item.discount
                     for item in self.order_orderitem_set.all()])
    return total - discounts
Referencing this property directly is quite simple. I do:
myOrders = Orders.objects.all()
for key in myOrders:
    print "My total is: ", key.total_price
However, if I use Orders.objects.all() as the 'source' attribute and try to reference 'total_price', Chartit gives me an error saying it can't find this field.
My chartit datapool looks like:
orderdata = \
    DataPool(
        series=
        [{'options': {
            'source': Order.objects.all()},
          'terms': [
              'order_date',
              'total_price']}
         ])

# Step 2: Create the Chart object
cht = Chart(
    datasource=orderdata,
    series_options=
    [{'options': {
        'type': 'line',
        'stacking': False},
      'terms': {
          'order_date': [
              'total_price']
      }}],
    chart_options=
    {'title': {
        'text': 'Total Orders Over Time'},
     'xAxis': {
         'title': {
             'text': 'Order Date'}}})
I get the error:
Field u'total_price' does not exist. Valid lookups are promo_code, enduser_address, etc....
It looks to me like Chartit is not able to reference my 'property' within the model. Is this just a limitation of the framework?
Does anyone know of a neat way of getting round this? It seems my options are:
1) Create my own JSON object, iterate over my 'orders' to build my own data list, and pass this to Highcharts directly (a sketch follows this list); or
2) Create another table, say 'OrderSaleHistory', populated for each month via a management function that Django runs periodically or that is triggered manually. This new table would then be passed to Chartit.
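For what it's worth, a minimal sketch of option 1: serialize the property values yourself and hand the list straight to Highcharts in the template, bypassing Chartit entirely. The field types are assumptions based on the model above (order_date a date field, total_price returning a Decimal):

import json

# Build the series by hand; total_price is a Python property, so this
# works where Chartit's database-field lookup does not.
chart_data = [{'order_date': o.order_date.isoformat(),
               'total_price': float(o.total_price)}
              for o in Order.objects.all()]
chart_json = json.dumps(chart_data)  # pass to the template for Highcharts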
Or does anyone have better ideas? :-)
This is my first post, so I'm quite a newbie at posting, though not at reading!
Kind regards, Nicholas.
I am looking for a way to find out whether an item with a certain label and description already exists on Wikidata. This task should be performed by Pywikibot. I don't want my bot to create a new item if one already exists. So far, my code looks like this:
...
def check_item_existence(self):
    transcript_file = self.transcript_file
    with open(transcript_file) as csvfile:
        transcript_dict = csv.DictReader(csvfile, delimiter="\t")
        for row in transcript_dict:
            site = pywikibot.Site("en", "TillsWiki")
            existing_item = pywikibot.ItemPage(site, row['Name'])
            title = existing_item.title()
You can use the wbsearchentities API module from the Wikibase API. The code to check whether an item with a specific English label exists on Wikidata is:
from pywikibot.data import api
...

def wikiitemexists(label):
    params = {'action': 'wbsearchentities', 'format': 'json',
              'language': 'en', 'type': 'item', 'limit': 1,
              'search': label}
    # acta_site is the answerer's pywikibot.Site object; substitute your own
    request = api.Request(site=acta_site, **params)
    result = request.submit()
    return len(result['search']) > 0
Notice that labels in Wikidata are not unique, and that the API searches aliases as well.
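Wired into the question's loop, this might look like the following (hypothetical; row['Name'] is assumed to hold the English label):

for row in transcript_dict:
    if wikiitemexists(row['Name']):
        continue  # an item with this label already exists, skip creation
    # ... otherwise create the new item here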