I am trying to write a program that transfers a user's Drive and Docs files from one user to another. It looks like I can do it using the Data Transfer API documentation.
I created the data transfer object, which looks like this:
datatransfer = {
    'kind': 'admin#datatransfer#DataTransfer',
    'oldOwnerUserId': 'somenumberhere',
    'newOwnderUserId': 'adifferentnumberhere',
    'applicationDataTransfers': [
        {
            'applicationId': '55656082996',  # the app ID for Drive and Docs
            'applicationTransferParams': [
                {
                    'key': 'PRIVACY_LEVEL',
                    'value': [
                        {
                            'PRIVATE',
                            'SHARED'
                        }
                    ]
                }
            ]
        }
    ]
}
I have some code here for handling OAuth, and then I bind the service with:
service = build('admin', 'datatransfer_v1', credentials=creds)
Then I attempt to call insert() with:
results = service.transfers().insert(body=datatransfer).execute()
and I get back an error saying 'missing required field: resource'.
I tried nesting all of this inside a field called resource and got the same message.
I tried passing in JUST a JSON structure that looked like {'resource': 'test'} and got the same message.
So I tried using the "Try this method" live tool on the documentation website. If I pass in no arguments at all, or just the old and new user, I get the same message: 'missing required nested field: resource'.
If I put in 'id': '55656082996' with ANY other arguments, it just returns a 500 backend error.
I tried manually adding a field named "resource" in the live tool, and it says "property 'resource' does not exist in object specification".
I finally got this to work. If anyone else is struggling with this and stumbles on this: "applicationId" is a number, not a string. Also, the error message is misleading; there is no nested field called "resource". This is what worked for me:
datatransfer = {
    "newOwnerUserId": "SomeNumber",
    "oldOwnerUserId": "SomeOtherNumber",
    "kind": "admin#datatransfer#DataTransfer",
    "applicationDataTransfers": [
        {
            "applicationId": 55656082996,
            "applicationTransferParams": [
                {
                    "key": "PRIVACY_LEVEL"
                },
                {
                    "value": [
                        "{PRIVATE, SHARED}"
                    ]
                }
            ]
        }
    ]
}
service = build('admin', 'datatransfer_v1', credentials=creds)
results = service.transfers().insert(body=datatransfer).execute()
print(results)
To get the user IDs, I'm first using the Directory API to query all suspended users and reading each user's ID from the result, then passing that ID into this transfer to move their files to another user before deleting them. A minimal sketch of that lookup follows.
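Here is a sketch of that Directory API lookup (assuming the same creds object as above, admin privileges, and a scope such as admin.directory.user.readonly; pagination is omitted):

# Sketch only: list suspended users and print the IDs to feed into the transfer.
directory = build('admin', 'directory_v1', credentials=creds)
resp = directory.users().list(
    customer='my_customer',           # shorthand for your own account's domain
    query='isSuspended=true',         # Admin SDK user search syntax
    fields='users(id,primaryEmail)'   # only fetch what we need
).execute()
for user in resp.get('users', []):
    print(user['id'], user['primaryEmail'])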
I am currently trying to retrieve login information from my Realtime DB in Firebase.
As an example, I want to detect whether the provided username already exists in the db.
Here is the code snippet I'm attempting to work with.
ref.child("Users").order_by_child("Username").equal_to("Hello").get()
It does not work, however, and instead displays this message:
firebase_admin.exceptions.InvalidArgumentError: Index not defined, add ".indexOn": "Username", for path "/Users", to the rules
Is there something wrong with my code snippet or should I use a different alternative?
As the error suggests, you need to add ".indexOn": "Username" in your security rules as shown below:
{
  "rules": {
    "Users": {
      ".indexOn": "Username",
      // ... rules
      "$uid": {
        // ... rules
      }
    }
  }
}
Check out the documentation for more information.
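Once the index is in place, the original snippet should work. A minimal end-to-end sketch (the key-file path and database URL below are placeholders):

import firebase_admin
from firebase_admin import credentials, db

# Placeholder credentials path and database URL.
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://your-db.firebaseio.com"})

ref = db.reference("/")
# Returns a dict of matching children, or an empty result if none match.
matches = ref.child("Users").order_by_child("Username").equal_to("Hello").get()
print("username taken" if matches else "username available")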
My Python script for automated patching fails with an error saying the parameter InstanceIds is invalid. Where do I state the value for InstanceIds in the script?
import boto3
ssm = boto3.client('ssm', region_name='us-east-1')
response = ssm.start_automation_execution(
    Parameters={
        'AutomationAssumeRole': [
            'parameters'
        ]
    },
    DocumentName='document-name',
    Mode='Auto',
    TargetParameterName='test',
    Targets=[
        {
            'Key': 'InstanceIds',
            'Values': ['i-1234567890abcd']
        }
    ],
    MaxErrors='10'
)
This gives me the error message
Invalid target value for key InstanceIds
What am I doing wrong here?
tl;dr: Make sure 'InstanceId' is listed as an input parameter in your automation document, and then try updating the Target.Key value you're using to 'ParameterValues'.
This may depend somewhat on your implementation, but I was running into the same error as you, and my parameters matched yours, except that I was using TargetParameterName='InstanceId' instead of TargetParameterName='test'. I tried several different values for Target.Key and none of them worked until this one:
Targets=[
    {
        'Key': 'ParameterValues',
        'Values': [
            'i-012345abcdeff',
            'i-012345abcdefg'
        ]
    }
],
As an aside, I think they could stand to improve the error messages this API returns in situations like this.
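Putting that together, a sketch of the full call (the document name, role ARN, and instance IDs are placeholders; 'InstanceId' must be declared as an input parameter in your automation document):

import boto3

ssm = boto3.client('ssm', region_name='us-east-1')

response = ssm.start_automation_execution(
    DocumentName='my-patch-document',  # placeholder document name
    Parameters={
        # Placeholder role ARN the automation assumes while running.
        'AutomationAssumeRole': ['arn:aws:iam::123456789012:role/AutomationRole']
    },
    Mode='Auto',
    TargetParameterName='InstanceId',  # must exist in the document's inputs
    Targets=[
        {'Key': 'ParameterValues', 'Values': ['i-012345abcdeff', 'i-012345abcdefg']}
    ],
    MaxErrors='10'
)
print(response['AutomationExecutionId'])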
I'd like to fetch the user ID that is shown in the User Explorer report in Google Analytics.
I am using the batchGet method below to get the list of user IDs via the ga:clientId dimension:
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet
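For context, a report request of this shape returns the client IDs (a sketch using the analytics service object built in the snippet further down; the dimension and metric choices are illustrative):

report_body = {
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "7daysAgo", "endDate": "today"}],
        "dimensions": [{"name": "ga:clientId"}],
        "metrics": [{"expression": "ga:sessions"}]
    }]
}
reports = analytics.reports().batchGet(body=report_body).execute()
for row in reports["reports"][0]["data"].get("rows", []):
    print(row["dimensions"][0])  # one client ID per row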
I am able to get the client IDs, but when I try the same ID with the API below,
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/userActivity/search#request-body
it returns a 400 'not found' error.
Even if I copy the user ID that is visible in the User Explorer dashboard in Google Analytics, it still returns a 400 'not found' error.
Is there anything I am doing wrong?
Code snippet
analytics = build('analyticsreporting', 'v4', credentials=credentials)
body = {
    "viewId": VIEW_ID,
    "user": {
        "type": "USER_ID",  # I have tried CLIENT_ID also
        "userId": user_id   # Copied directly from User Explorer in the browser for testing; it didn't work either.
    }
}
result = analytics.userActivity().search(body=body).execute()
Response
Traceback (most recent call last):
  File "ga_session_data.py", line 192, in <module>
    ga.main()
  File "ga_session_data.py", line 178, in main
    result=analytics.userActivity().search(body=body).execute()
  File "env/lib/python3.6/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "env/lib/python3.6/site-packages/googleapiclient/http.py", line 856, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://analyticsreporting.googleapis.com/v4/userActivity:search?alt=json returned "CLIENT_ID: XXXXXXXX not found.">
User ID and client ID are two distinct dimensions in Google Analytics. The User Explorer report is based on user ID, and this ID might differ from the client ID that appears in API reports under the ga:clientId dimension.
To use Activity reports based on a client ID value, use the following object in your Activity request:
{
  "type": "CLIENT_ID",
  "userId": "your.value"
}
To get data for a particular user ID that appears in the User Explorer report, use the following object:
{
  "type": "USER_ID",
  "userId": "your.value"
}
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/userActivity/search#request-body
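In Python terms, only the user object changes (a sketch reusing the analytics service object and VIEW_ID from the question; client_id is a value taken from ga:clientId):

body = {
    "viewId": VIEW_ID,
    "user": {
        "type": "CLIENT_ID",  # pair CLIENT_ID with a ga:clientId value
        "userId": client_id
    }
}
result = analytics.userActivity().search(body=body).execute()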
This is a weird bug in Google Analytics' Reporting API. I stumbled upon the same problem and figured out what is wrong. When fetching a user recently recorded in Google Analytics, you can't simply query it by user ID, because Google defaults to fetching results from the previous week.
Why Not Found?
The response from the GA Reporting API is half the story. When it says USER_ID: {xxx-yyy-zzz} not found, it does not mean that the user was never recorded by GA; it means that the user you requested data for was not found in this date range.
Solution
Pass an explicit date range when fetching users; this way you are safe from the USER_ID: {xxx-yyy-zzz} not found error.
Via Analytics Reporting API
https://analyticsreporting.googleapis.com/v4/userActivity:search:
{
  "user": {
    "type": "USER_ID",
    "userId": "your-custom-user-id"
  },
  "viewId": "XXXXYYYY",
  "dateRange": {
    "startDate": "2022-11-12",
    "endDate": "2022-11-16"
  }
}
via Hello Analytics (PHP):
composer require google/apiclient:^2.0
$client = new Google_Client();
$client->setAuthConfig(storage_path('app/analytics/service-account-credentials.json'));
$client->addScope(\Google_Service_Analytics::ANALYTICS_READONLY);
// Create the DateRange object.
$dateRange = new Google_Service_AnalyticsReporting_DateRange();
$dateRange->setStartDate("7daysAgo");
$dateRange->setEndDate("today");
$analytics = new Google_Service_AnalyticsReporting($client);
$user = new Google_Service_AnalyticsReporting_User();
$user->setType("USER_ID"); // pass "CLIENT_ID" for using Client ID
$user->setUserId("your-custom-user-id"); //User ID
$userActivityRequest = new Google_Service_AnalyticsReporting_SearchUserActivityRequest();
$userActivityRequest->setViewId(env('ANALYTICS_VIEW_ID'));
$userActivityRequest->setDateRange($dateRange);
$userActivityRequest->setUser($user);
// Passing params is optional
$param = [
];
$sessions = $analytics->userActivity->search($userActivityRequest, $param);
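via Python (a minimal sketch mirroring the question's snippet; the explicit dateRange is what avoids the default last-7-days window):

body = {
    "viewId": VIEW_ID,
    "user": {"type": "USER_ID", "userId": user_id},
    "dateRange": {"startDate": "2022-11-12", "endDate": "2022-11-16"}
}
result = analytics.userActivity().search(body=body).execute()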
More info regarding this method can be found in the userActivity.search reference linked above; official Python and Java samples are also available.
Is it possible to extract data (to Google Cloud Storage) from a shared dataset (where I only have view permissions) using the client APIs (Python)?
I can do this manually using the web browser, but cannot get it to work using the APIs.
I have created a project (MyProject) and a service account for MyProject to use as credentials when creating the service using the API. This account has view permissions on a shared dataset (MySharedDataset) and write permissions on my google cloud storage bucket. If I attempt to run a job in my own project to extract data from the shared project:
job_data = {
    'jobReference': {
        'projectId': myProjectId,
        'jobId': str(uuid.uuid4())
    },
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
I get the error:
googleapiclient.errors.HttpError: https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json
returned "Value 'myProjectId' in content does not agree with value
'sharedProjectId'. This can happen when a value set through a parameter
is inconsistent with a value set in the request.">
Using the sharedProjectId in both the jobReference and sourceTable I get:
googleapiclient.errors.HttpError: https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json
returned "Access Denied: Job myJobId: The user myServiceAccountEmail
does not have permission to run a job in project sharedProjectId">
Using myProjectId for both, the job immediately comes back with a status of 'DONE' and no errors, but nothing has been exported. My GCS bucket is empty.
If this is indeed not possible using the API, is there another method/tool that can be used to automate the extraction of data from a shared dataset?
* UPDATE *
This works fine using the API explorer running under my GA login. In my code I use the following method:
service.jobs().insert(projectId=myProjectId, body=job_data).execute()
and removed the jobReference object containing the projectId:
job_data = {
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
but this returns the error:
Access Denied: Table sharedProjectId:sharedDatasetId.sharedTableId: The user 'serviceAccountEmail' does not have permission to export a table in
dataset sharedProjectId:sharedDatasetId
My service account is now an owner on the shared dataset and has edit permissions on MyProject. Where else do permissions need to be set, or is it possible to use the Python API with my Google account credentials rather than the service account?
* UPDATE *
Finally got it to work. How? Make sure the service account has permission to view the dataset (and if you can't check this yourself and someone tells you that it does, ask them to double-check or send you a screenshot!).
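For reference, a minimal sketch of the working flow (the key-file path is a placeholder, and the IDs are the same placeholders as in the question; the service account needs viewer access on the shared dataset and write access to the destination bucket):

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder key file; the bigquery scope covers running jobs and reading tables.
creds = service_account.Credentials.from_service_account_file(
    'service-account.json',
    scopes=['https://www.googleapis.com/auth/bigquery'])
service = build('bigquery', 'v2', credentials=creds)

job_data = {
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
# The job runs (and is billed) in your own project, not the shared one.
result = service.jobs().insert(projectId=myProjectId, body=job_data).execute()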
After trying to reproduce the issue, I ran into the same parse errors.
I did, however, play around with the API in the Developer Console [2] and it worked.
One thing I noticed is that the request code below had a different format than the documentation on the website, as it uses single quotes instead of double quotes.
Here is the code that I ran to get it to work.
{
  'configuration': {
    'extract': {
      'sourceTable': {
        'projectId': "sharedProjectID",
        'datasetId': "sharedDataSetID",
        'tableId': "sharedTableID"
      },
      'destinationUri': "gs://myBucket/myFile.csv"
    }
  }
}
HTTP Request
POST https://www.googleapis.com/bigquery/v2/projects/myProjectId/jobs
If you are still running into problems, you can try the jobs.insert API on the website [2] or the bq command-line tool [3].
The following command can do the same thing:
bq extract sharedProjectId:sharedDataSetId.sharedTableId gs://myBucket/myFile.csv
Hope this helps.
[2] https://cloud.google.com/bigquery/docs/reference/v2/jobs/insert
[3] https://cloud.google.com/bigquery/bq-command-line-tool
I am facing a problem while fetching the degree of connection between two LinkedIn users. I am sending a request to
https://api.linkedin.com/v1/people::(~,id=<other person's linkedin id>):(relation-to-viewer:(distance))?format=json&oauth2_access_token=<user's access token>.
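For context, the same call in Python (a sketch with a placeholder token; the member ID shown is the one from the sample response below, using the v1 endpoint above):

import requests

member_id = "x1XPVjdXkb"  # the other person's LinkedIn member ID
token = "ACCESS_TOKEN"    # placeholder: the viewer's OAuth2 access token

url = ("https://api.linkedin.com/v1/people::(~,id=%s)"
       ":(relation-to-viewer:(distance))" % member_id)
resp = requests.get(url, params={"format": "json", "oauth2_access_token": token})
print(resp.json())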
Sometimes I get a correct response:
{
  "_total": 2,
  "values": [
    {
      "_key": "~",
      "relationToViewer": {"distance": 0}
    },
    {
      "_key": "id=x1XPVjdXkb",
      "relationToViewer": {"distance": 2}
    }
  ]
}
while most of the time I get an erroneous response:
{
  "_total": 1,
  "values": [
    {
      "_key": "~",
      "relationToViewer": {"distance": 0}
    }
  ]
}
I have gone through the LinkedIn API's profile fields and I believe that I am using the API correctly. I am not sure what's wrong here. Please help.
After posting on the LinkedIn forum, I got this response:
The behavior you're seeing where you only get yourself back from your call falls in line with what I'd expect to see if the member ID you're asking for isn't legitimate. If the member ID exists, but isn't in ~'s network, you should get a -1 distance back, not nothing at all, as you are seeing. However if you put in a completely invalid member ID, only information about ~ will be returned from the call.
This was indeed the problem. The Android client and the iOS client had different API keys, and both were using the same backend to fetch the degree of connection. Using the same API key for both clients resolved the issue.