Google Analytics: fetch user ID in the Reporting API - Python

I would like to fetch the user ID that is shown in the User Explorer report in Google Analytics.
I am using the batchGet endpoint below to get the list of IDs via the ga:clientId dimension:
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet
I am able to get the client IDs, but when I try the same ID with the API below:
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/userActivity/search#request-body
it returns a 400 "not found" error.
Even if I copy a user ID that is visible in the User Explorer dashboard in Google Analytics, it still returns a 400 "not found" error.
Is there anything I am doing wrong?
Code snippet:
from googleapiclient.discovery import build

analytics = build('analyticsreporting', 'v4', credentials=credentials)
body = {
    "viewId": VIEW_ID,
    "user": {
        "type": "USER_ID",  # I have tried CLIENT_ID also
        "userId": user_id   # For testing I copied the value directly from the User Explorer in the browser, but it didn't work.
    }
}
result = analytics.userActivity().search(body=body).execute()
Response:
Traceback (most recent call last):
  File "ga_session_data.py", line 192, in <module>
    ga.main()
  File "ga_session_data.py", line 178, in main
    result = analytics.userActivity().search(body=body).execute()
  File "env/lib/python3.6/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "env/lib/python3.6/site-packages/googleapiclient/http.py", line 856, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://analyticsreporting.googleapis.com/v4/userActivity:search?alt=json returned "CLIENT_ID: XXXXXXXX not found.">

User ID and client ID are two distinct dimensions in Google Analytics. The User Explorer report is based on the user ID, and this ID may differ from the client ID that appears in API reports under the ga:clientId dimension.
To request Activity reports based on a client ID value, use the following object in your Activity request:
{
  "type": "CLIENT_ID",
  "userId": "your.value"
}
To get data for a particular user ID as it appears in the User Explorer report, use the following object:
{
  "type": "USER_ID",
  "userId": "your.value"
}
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/userActivity/search#request-body
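In the question's Python code, that maps to simply swapping the user object. A minimal sketch, assuming the analytics service, VIEW_ID, and the two ID values from the question are already defined:

# Look up activity by the value obtained from the ga:clientId dimension.
client_body = {
    "viewId": VIEW_ID,
    "user": {"type": "CLIENT_ID", "userId": client_id},
}

# Look up activity by the User ID shown in the User Explorer report
# (the view must have User-ID tracking enabled for these to exist).
user_body = {
    "viewId": VIEW_ID,
    "user": {"type": "USER_ID", "userId": user_id},
}

result = analytics.userActivity().search(body=client_body).execute()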

This is a confusing behavior of the Google Analytics Reporting API. I stumbled upon the same problem and figured out what is wrong here. When fetching a user recently recorded in Google Analytics, you can't simply query by User ID alone, because the API only searches a default date window (roughly the previous week).
Why Not Found?
The response from the GA Reporting API is half the story. When it says USER_ID: {xxx-yyy-zzz} not found, it never means the user was never recorded by GA; it means the user you requested data for was not found in this date range.
Solution
Pass an explicit date range when fetching users; this way you are safe from the USER_ID: {xxx-yyy-zzz} not found error.
Via the Analytics Reporting API
POST https://analyticsreporting.googleapis.com/v4/userActivity:search with a body like:
{
  "user": {
    "type": "USER_ID",
    "userId": "your-custom-user-id"
  },
  "viewId": "XXXXYYYY",
  "dateRange": {
    "startDate": "2022-11-12",
    "endDate": "2022-11-16"
  }
}
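Since the question's snippet is Python, the same fix there is just adding a dateRange to the request body. A sketch reusing the analytics service, VIEW_ID, and user_id from the question (the dates are examples):

body = {
    "viewId": VIEW_ID,
    "user": {
        "type": "USER_ID",
        "userId": user_id,
    },
    # Cover the period in which the user was actually recorded;
    # otherwise the API only searches its default recent window.
    "dateRange": {
        "startDate": "2022-11-12",
        "endDate": "2022-11-16",
    },
}
result = analytics.userActivity().search(body=body).execute()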
Via Hello Analytics (PHP):
composer require google/apiclient:^2.0

$client = new Google_Client();
$client->setAuthConfig(storage_path('app/analytics/service-account-credentials.json'));
$client->addScope(\Google_Service_Analytics::ANALYTICS_READONLY);

// Create the DateRange object.
$dateRange = new Google_Service_AnalyticsReporting_DateRange();
$dateRange->setStartDate("7daysAgo");
$dateRange->setEndDate("today");

$analytics = new Google_Service_AnalyticsReporting($client);

$user = new Google_Service_AnalyticsReporting_User();
$user->setType("USER_ID"); // pass "CLIENT_ID" to look up by client ID instead
$user->setUserId("your-custom-user-id"); // the User ID value

$userActivityRequest = new Google_Service_AnalyticsReporting_SearchUserActivityRequest();
$userActivityRequest->setViewId(env('ANALYTICS_VIEW_ID'));
$userActivityRequest->setDateRange($dateRange);
$userActivityRequest->setUser($user);

// Passing extra params is optional.
$param = [];

$sessions = $analytics->userActivity->search($userActivityRequest, $param);

Related

How to transfer files with the Google Workspace Admin SDK (Python)

I am trying to write a program that transfers a user's Drive and Docs files from one user to another. It looks like I can do it using the Admin SDK Data Transfer API (per its documentation).
I created the data transfer object, which looks like this:
datatransfer = {
    'kind': 'admin#datatransfer#DataTransfer',
    'oldOwnerUserId': 'somenumberhere',
    'newOwnderUserId': 'adifferentnumberhere',
    'applicationDataTransfers': [
        {
            'applicationId': '55656082996',  # the app id for Drive and Docs
            'applicationTransferParams': [
                {
                    'key': 'PRIVACY_LEVEL',
                    'value': [
                        {'PRIVATE', 'SHARED'}
                    ]
                }
            ]
        }
    ]
}
I have some code here for handling OAuth, and then I build the service with:
service = build('admin', 'datatransfer_v1', credentials=creds)
Then I attempt to call insert() with:
results = service.transfers().insert(body=datatransfer).execute()
and I get back an error saying 'missing required field: resource'.
I tried nesting all of this inside a field called resource, and I get the same message.
I tried passing in JUST a JSON structure that looked like {'resource': 'test'}, and I get the same message.
So I tried the "Try this method" live tool on the documentation website.
If I pass in no arguments at all, or just pass in the old and new user, I get the same message: 'missing required nested field: resource'.
If I put in 'id': '55656082996' with ANY other arguments, it just returns error code 500, backend error.
I tried manually adding a field named "resource" to the live tool, and it says "property 'resource' does not exist in object specification".
I finally got this to work. If anyone else is struggling with this and stumbles on this: "applicationId" is a number, not a string. Also, the error message is misleading; there is no nested field called "resource". This is what worked for me:
datatransfer = {
    "newOwnerUserId": "SomeNumber",
    "oldOwnerUserId": "SomeOtherNumber",
    "kind": "admin#datatransfer#DataTransfer",
    "applicationDataTransfers": [
        {
            "applicationId": 55656082996,  # a number, not a string
            "applicationTransferParams": [
                {
                    "key": "PRIVACY_LEVEL"
                },
                {
                    "value": [
                        "{PRIVATE, SHARED}"
                    ]
                }
            ]
        }
    ]
}

service = build('admin', 'datatransfer_v1', credentials=creds)
results = service.transfers().insert(body=datatransfer).execute()
print(results)
To get the user IDs, I first use the Directory API to query all suspended users and take their IDs from that, then pass each ID into this to transfer their files to another user before deleting them. A sketch of that lookup follows.
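For anyone reproducing that last step, a rough sketch of the suspended-user lookup via the Directory API could look like this (the isSuspended=true query syntax is from the Directory API's user search documentation; creds is assumed to carry an admin directory scope):

from googleapiclient.discovery import build

directory = build('admin', 'directory_v1', credentials=creds)

# List suspended users in the domain. Each user's numeric 'id' is
# what oldOwnerUserId / newOwnerUserId in the transfer body expect.
response = directory.users().list(
    customer='my_customer',
    query='isSuspended=true',
).execute()
suspended_ids = [u['id'] for u in response.get('users', [])]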

Push data to Campaign Monitor using Python

Campaign Monitor is a service for sending emails to a set of subscribers. We can create multiple lists within Campaign Monitor and add users to these lists as subscribers (to whom we can send personalised emails). I am trying to send a set of customers' details (name, email, total_bookings, and first_booking) to a Campaign Monitor list using the API in Python, so that I can send emails to this set of users.
More details on Campaign Monitor: https://www.campaignmonitor.com/api/v3-3/subscribers/
I am new to Campaign Monitor. I have searched the documentation and a lot of posts and blogs for examples of how to push data with multiple custom fields to Campaign Monitor using Python. By default, a list in Campaign Monitor holds a name and an email address for each subscriber, but I want to add other details for each subscriber (here, total_bookings and first_booking), and Campaign Monitor provides custom fields for this.
For instance:
I have my data stored in a Redshift table named customer_details with the fields name, email, total_bookings, and first_booking. I was able to retrieve this data from the Redshift table in Python with the following code:
# Get the data from the above table:
cursor = connection.cursor()
cursor.execute("select * from customer_details")
creator_details = cursor.fetchall()
# Now I have the data as a list of tuples in creator_details.
Now I want to push this data to a list in Campaign Monitor using the API, something like requests.put('https://api.createsend.com/api/../.../..'). But I am not sure how to do this. Can someone please help me here?
A 400 status indicates invalid parameters.
First, we can see the request is POST, not PUT, so change requests.put to requests.post.
Next, the variables need to be sent either as www-form-data or as a JSON body (I'm not sure which).
And lastly, you almost certainly cannot authenticate with basic auth ... but maybe.
Something like the following:
some_variables = some_values
...
header = {"Authorization": f"Bearer {MY_API_KEY}"}
data = {
    "email": customer_email,
    "CustomFields": [
        {"key": "total_bookings", "value": customer_details2}
    ]
}
url = f'https://api.createsend.com/api/v3.3/subscribers/{my_list_id}.json'
res = requests.post(url, json=data, headers=header)
print(res.status_code)
try:
    print(res.json())
except Exception:
    print(res.content)
After looking more into the API docs, it looks like this is the expected request:
{
  "EmailAddress": "subscriber@example.com",
  "Name": "New Subscriber",
  "MobileNumber": "+5012398752",
  "CustomFields": [
    {
      "Key": "website",
      "Value": "http://example.com"
    },
    {
      "Key": "interests",
      "Value": "magic"
    },
    {
      "Key": "interests",
      "Value": "romantic walks"
    }
  ],
  "Resubscribe": true,
  "RestartSubscriptionBasedAutoresponders": true,
  "ConsentToTrack": "Yes"
}
which we can see has "EmailAddress", not "email", so you would need to do:
data = {"EmailAddress": customer_email, "CustomFields": [{"Key": "total_bookings", "Value": customer_details2}]}
I'm not sure whether all of the fields are required, so you may also need to provide "Name", "MobileNumber", "Resubscribe", etc.
Looking at "Getting Started", it appears they publish a Python package to make interfacing simpler:
http://campaignmonitor.github.io/createsend-python/
which makes it as easy as:
import createsend

cli = createsend.CreateSend({"api_key": MY_API_KEY})
cli.subscriber.add(list_id, "user@email.com", "John Doe", custom_fields, True, "No")
(which I found here: https://github.com/campaignmonitor/createsend-python/blob/master/test/test_subscriber.py#L70)
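To tie this back to the original Redshift loop, a rough end-to-end sketch might look like the following. The tuple order is assumed from the select * above, the list ID and API key are placeholders, and HTTP Basic auth with the API key as the username is how Campaign Monitor's docs describe authentication:

import requests

CM_API_KEY = "your-api-key"   # placeholder
CM_LIST_ID = "your-list-id"   # placeholder
url = f"https://api.createsend.com/api/v3.3/subscribers/{CM_LIST_ID}.json"

for name, email, total_bookings, first_booking in creator_details:
    payload = {
        "EmailAddress": email,
        "Name": name,
        "CustomFields": [
            {"Key": "total_bookings", "Value": str(total_bookings)},
            {"Key": "first_booking", "Value": str(first_booking)},
        ],
        "ConsentToTrack": "Yes",
    }
    # Campaign Monitor's API uses HTTP Basic auth: the API key as the
    # username and any string as the password.
    res = requests.post(url, json=payload, auth=(CM_API_KEY, "x"))
    res.raise_for_status()

Note that the total_bookings and first_booking custom fields need to exist on the list beforehand; custom fields that are not defined on the list are not created on the fly.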

Microsoft Graph synchronization API: updating the secret token only works on the second call

I'm implementing (in Python, with the Microsoft Graph API) the creation of an Azure AD application based on the AWS template. I'm stuck implementing automatic role provisioning as described in this documentation: https://learn.microsoft.com/fr-fr/graph/application-provisioning-configure-api?tabs=http#step-3-authorize-access
When I call the servicePrincipals/{id}/synchronization/secrets API for the first time, just after creating the synchronization job, I receive an HTTP error (400 - Bad Request) with the following body:
{
  "error": {
    "code": "BadRequest",
    "message": "The credentials could not be saved. This is due to an internal storage issue in the Microsoft Azure AD service. For information on how to address this issue, please refer to https://go.microsoft.com/fwlink/?linkid=867915",
    "innerError": {
      "code": "CredentialStorageBadRequest",
      "details": [],
      "message": "The credentials could not be saved. This is due to an internal storage issue in the Microsoft Azure AD service. For information on how to address this issue, please refer to https://go.microsoft.com/fwlink/?linkid=867915",
      "target": null,
      "innerError": {
        "code": "CredentialStorageBadRequest",
        "details": [],
        "message": "Message:The credentials could not be saved. This is due to an internal storage issue in the Microsoft Azure AD service. For information on how to address this issue, please refer to https://go.microsoft.com/fwlink/?linkid=867915",
        "target": null
      },
      "date": "2021-01-05T15:53:59",
      "request-id": "---",
      "client-request-id": "---"
    }
  }
}
When I make the same call a second time (with MS Graph Explorer, Postman, or directly in Python), it works: the second call returns an HTTP 204 as expected! So I think my request is correct.
This is my implementation (which works because I retry the call a second time…):
# Default value:
GRAPH_API_URL = "https://graph.microsoft.com/beta/{endpoint}"

class Azure:
    # […]
    # self._http_headers contains my token to access the MS Graph API
    # self._aws_key_id and self._aws_access_key contain the AWS credentials

    def _save_sync_job_auth(self, principal_id):
        self._put(
            f"servicePrincipals/{principal_id}/synchronization/secrets",
            {"value": [
                {"key": "ClientSecret", "value": self._aws_key_id},
                {"key": "SecretToken", "value": self._aws_access_key},
            ]},
            retry=1  # If I put 0 here, my script fails
        )

    # […]
    def _put(self, endpoint, json, retry=0):
        return self._http_request(requests.put, endpoint, retry, json=json)

    # […]
    def _http_request(self, func, endpoint, retry=0, **kwargs):
        url = GRAPH_API_URL.format(endpoint=endpoint)
        response = func(url, headers=self._http_headers, **kwargs)
        try:
            response.raise_for_status()
        except requests.HTTPError as e:
            if retry:
                logging.warning(f"Error when calling {func.__name__.upper()} {url}")
                return self._http_request(func, endpoint, retry - 1, **kwargs)
            else:
                raise e
        return response
Am I missing something? Do you have a solution to remove this "retry hack"?

How do I give my application the capability to make an AdUser API call?

I am trying to programmatically create and manage facebook advertising campaigns.
The following is my Python code:
from facebookads.api import FacebookAdsApi
from facebookads.adobjects.aduser import AdUser

my_app_id = 'MyAppId'
my_app_secret = 'MyAppSecret'
my_access_token = 'MyAccessToken'
FacebookAdsApi.init(my_app_id, my_app_secret, my_access_token)
me = AdUser(fbid='MyFbId')
my_accounts = list(me.get_ad_accounts())
print(my_accounts)
Sadly, when I run this function I get an error from the following line:
my_accounts = list(me.get_ad_accounts())
The error is:
facebookads.exceptions.FacebookRequestError:
    Message: Call was not successful
    Method:  GET
    Path:    https://graph.facebook.com/v2.8/643866195/adaccounts
    Params:  {'summary': 'true'}
    Status:  400
    Response:
        {
          "error": {
            "message": "(#3) Application does not have the capability to make this API call.",
            "code": 3,
            "type": "OAuthException",
            "fbtrace_id": "H3BFNpSClup"
          }
        }
I have tried a few things to resolve this. One thing I thought would work was to add my account ID to the Authorized Ad Account IDs in the Facebook developer app settings page, but that didn't help.
Thanks.

Extracting BigQuery Data From a Shared Dataset

Is it possible to extract data (to Google Cloud Storage) from a shared dataset (where I only have view permissions) using the client APIs (Python)?
I can do this manually using the web browser, but cannot get it to work using the APIs.
I have created a project (MyProject) and a service account for MyProject to use as credentials when creating the service via the API. This account has view permissions on a shared dataset (MySharedDataset) and write permissions on my Google Cloud Storage bucket. If I attempt to run a job in my own project to extract data from the shared project:
job_data = {
    'jobReference': {
        'projectId': myProjectId,
        'jobId': str(uuid.uuid4())
    },
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
I get the error:
googleapiclient.errors.HttpError: <https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json returned "Value 'myProjectId' in content does not agree with value 'sharedProjectId'. This can happen when a value set through a parameter is inconsistent with a value set in the request.">
Using the sharedProjectId in both the jobReference and the sourceTable, I get:
googleapiclient.errors.HttpError: <https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json returned "Access Denied: Job myJobId: The user myServiceAccountEmail does not have permission to run a job in project sharedProjectId">
Using myProjectId for both, the job immediately comes back with a status of 'DONE' and no errors, but nothing has been exported. My GCS bucket is empty.
If this is indeed not possible using the API, is there another method/tool that can be used to automate the extraction of data from a shared dataset?
* UPDATE *
This works fine using the API explorer running under my GA login. In my code I use the following method:
service.jobs().insert(projectId=myProjectId, body=job_data).execute()
and removed the jobReference object containing the projectId:
job_data = {
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
but this returns the error:
Access Denied: Table sharedProjectId:sharedDatasetId.sharedTableId: The user 'serviceAccountEmail' does not have permission to export a table in dataset sharedProjectId:sharedDatasetId
My service account is now an owner on the shared dataset and has edit permissions on MyProject. Where else do permissions need to be set? Or is it possible to use the Python API with my GA login credentials rather than the service account?
* UPDATE *
Finally got it to work. How? Make sure the service account has permissions to view the dataset (and if you don't have access to check this yourself and someone tells you that it does, ask them to double check or send you a screenshot!)
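Putting the thread's resolution together, a minimal sketch of the working extract call, assembled from the pieces above (job submitted to your own project, source table pointing at the shared dataset, service account holding view access on that dataset; variable names as in the question):

import uuid
from googleapiclient.discovery import build

service = build('bigquery', 'v2', credentials=credentials)

job_data = {
    'jobReference': {
        'projectId': myProjectId,   # the job runs in *your* project...
        'jobId': str(uuid.uuid4()),
    },
    'configuration': {
        'extract': {
            'sourceTable': {        # ...but reads the shared table
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO',
        }
    }
}
service.jobs().insert(projectId=myProjectId, body=job_data).execute()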
After trying to reproduce the issue, I was running into the parse errors.
I did, however, play around with the API on the Developer Console [2] and it worked.
What I noticed is that the request below has a different format than the documentation on the website, as it has single quotes instead of double quotes.
Here is the code that I ran to get it to work:
{
  'configuration': {
    'extract': {
      'sourceTable': {
        'projectId': "sharedProjectID",
        'datasetId': "sharedDataSetID",
        'tableId': "sharedTableID"
      },
      'destinationUri': "gs://myBucket/myFile.csv"
    }
  }
}
HTTP Request
POST https://www.googleapis.com/bigquery/v2/projects/myProjectId/jobs
If you are still running into problems, you can try the jobs.insert API on the website [2] or try the bq command-line tool [3].
The following command can do the same thing:
bq extract sharedProjectId:sharedDataSetId.sharedTableId gs://myBucket/myFile.csv
Hope this helps.
[2] https://cloud.google.com/bigquery/docs/reference/v2/jobs/insert
[3] https://cloud.google.com/bigquery/bq-command-line-tool
