I'm also posting my question here. Maybe someone has a pragmatic way to solve this:
Currently I've been messing around with automatically creating users in gitlab-ce, adding existing SSH keys to them and afterwards changing their identity to a different provider, in my case Atlassian Crowd. I tried to set the identity during user creation by adding { 'identities': [ { 'provider': 'crowd', 'extern_uid': 'foobar' } ] } to my POST request, but while the user is correctly created, the identity setting gets ignored. The request is sent against http://localhost/api/v3/users and actually looks like the following:
{
  "email": "foo.bar@aol.com",
  "password": "aol123aol123",
  "username": "foo.bar",
  "name": "Foo Bar",
  "identities": [
    {
      "provider": "crowd",
      "extern_uid": "fbar"
    }
  ]
}
As I said, the user is created, but not the identity. Directly setting the identity through the API, that is, setting a certain identity provider along with an ID at the external system, does not seem to be implemented. That's why I filed an issue at gitlab.com (https://gitlab.com/gitlab-org/gitlab-ce/issues/27693).
I'm now looking for an alternative to fix this. I'm working on a migration from different technologies and I want to automate the user management first. As there are quite a few users, it's not feasible to do this manually.
As a matter of fact, it is possible to set the identity of a user in the admin UI. Inspecting it in the browser when clicking on the create button showed that it is a POST request against
http://localhost/admin/users/foo.bar/identities
The content is URL-encoded:
utf8:✓
authenticity_token:47yRB038sLQQ7bBP4vYGdVcQzg/8js09h5mUkz5vNYSALAjRqIpAFjYube8VxUlEKChNcrjNmx7s0RW8tDWFqC==
identity[provider]:crowd
identity[extern_uid]:fbar
As you can see, it's not an API URL, but the UI. What is unknown to me here is the authenticity_token. It's not the private access token of the admin. Is it a session token?
From a technology point of view, I'm using Python with requests to do all this.
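For reference, this is roughly how I send the request (host and admin token are placeholders):
import requests

GITLAB_API = "http://localhost/api/v3"
HEADERS = {"PRIVATE-TOKEN": "ADMIN_PRIVATE_TOKEN"}  # placeholder admin token

payload = {
    "email": "foo.bar@aol.com",
    "password": "aol123aol123",
    "username": "foo.bar",
    "name": "Foo Bar",
    "identities": [
        {"provider": "crowd", "extern_uid": "fbar"}
    ],
}

# The user gets created, but the identities part is silently ignored.
r = requests.post(GITLAB_API + "/users", headers=HEADERS, json=payload)
print(r.status_code, r.json())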
To achieve this, you need the sudo support offered by gitlab-ce's API.
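A minimal sketch of how sudo is passed, assuming an admin private token (token and username are placeholders); the call is then performed as the given user:
import requests

HEADERS = {
    "PRIVATE-TOKEN": "ADMIN_PRIVATE_TOKEN",  # must be an administrator's token
    "Sudo": "foo.bar",                       # perform the call as this user
}

# With these headers, /user returns foo.bar's profile instead of the admin's.
r = requests.get("http://localhost/api/v3/user", headers=HEADERS)
print(r.json())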
In the Firebase Console I set up audiences based on various user properties and am now able to send notifications to different user segments via the console. Is there a way to do the same via HTTP requests to the FCM servers? There should be a trick with the "to" field, but I couldn't figure it out.
firebaser here
There is currently no way to send a notification to a user segment programmatically. It can only be done from the Firebase Console as you've found.
We're aware that allowing this through an API would expand the potential for Firebase Notifications a lot. So we're considering adding it to the API. But as usual: no commitment and no timelines, since those tend to change as priorities shift.
This has been a popular request, but unfortunately it is not yet possible. We are looking into this.
Please check Firebase Cloud Messaging announcements for any updates in the future.
You can try topic subscriptions. It is not a perfect solution, but it's the best one for me at this time.
{
  "to": "/topics/audience1_subscription",
  "data": {
    "title": "Sample title",
    "body": "Sample body"
  }
}
Yes. No solid solutions are available as of now, but I have a workaround. It can't handle every scenario, but it gets the work done.
For that, you need to figure out the audience within the app and segment the users with topics. Then you can send a push notification to that particular topic via the API.
Let's take an example:
Send notifications to users who didn't open the app in the last 7 days.
Subscribe the user to a topic named "app-open?date=09-21-2022" each time the user opens the app: just unsubscribe from the topic of the last app open and subscribe to a new topic with the current date.
Then you just need to build the topic string for the current day minus 7 and send to it.
You can create multiple topics for the same user for different behaviors and use them to send push notifications via the API to segmented users.
As there is no limit on topics per user or topics per project, you can create as many topics as you need.
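A rough sketch of this idea with the firebase_admin Python SDK; the names, the service-account file and the notification text are placeholders, the date is encoded with dashes because FCM topic names only allow the characters [a-zA-Z0-9-_.~%], and in practice the app could also subscribe itself with the client SDK instead of the server doing it:
from datetime import date, timedelta

import firebase_admin
from firebase_admin import credentials, messaging

firebase_admin.initialize_app(credentials.Certificate("service-account.json"))

def app_open_topic(day):
    # e.g. "app-open-2022-09-21"
    return "app-open-" + day.isoformat()

def track_app_open(token, last_open_day):
    # Called each time the user opens the app: leave the previous date topic
    # (if any) and join the topic for today.
    if last_open_day is not None:
        messaging.unsubscribe_from_topic([token], app_open_topic(last_open_day))
    messaging.subscribe_to_topic([token], app_open_topic(date.today()))

def notify_seven_day_inactive_users():
    # Reaches users whose last recorded open was exactly 7 days ago;
    # older date topics would need their own sends.
    topic = app_open_topic(date.today() - timedelta(days=7))
    messaging.send(messaging.Message(
        notification=messaging.Notification(title="Sample title", body="Sample body"),
        topic=topic,
    ))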
Yes, there is a trick with the "to" field, as shown below.
The URL is: https://fcm.googleapis.com/fcm/send
Content-Type: application/json
Authorization: key=YOUR_SERVER_KEY
JSON data format:
{
  "to": "USER_FIREBASE_TOKEN",
  "data": {"message": "This is a Firebase Cloud Messaging Topic Message"},
  "notification": {"body": "This is firebase body"}
}
From the documentation at https://developers.google.com/vault/guides/exports, I've been able to create, list, and retrieve exports, but I haven't found any way to download the exported data associated with a specific export. Is there any way to download the exported files via the API, or is this only available through the Vault UI?
There is a cloudStorageSink key in the export metadata, but trying to use the values it provides with the Cloud Storage API results in a generic permissions error (403).
Example export metadata response:
{
  "status": "COMPLETED",
  "cloudStorageSink": {
    "files": [
      {
        "md5Hash": "da5e3979864d71d1e3ac776b618dcf48",
        "bucketName": "408d9135-6155-4a43-9d3c-424f124b9474",
        "objectName": "a740999b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd7886-fe02-432f-83c-a4b6fd4520a5/Test_Export-1.zip",
        "size": "37720"
      },
      {
        "md5Hash": "d345a812e15cdae3b6277a0806668808",
        "bucketName": "408d9135-6155-4a43-9d3c-424f124b9474",
        "objectName": "a507999b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd6886-fb02-4c2f-813c-a4b6fd4520a5/Test_Export-metadata.xml",
        "size": "8943"
      },
      {
        "md5Hash": "21e91e1c60e6c07490faaae30f8154fd",
        "bucketName": "408d9135-6155-4a43-9d3c-424f124b9474",
        "objectName": "a503959b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd6786-fb02-42f-813c-a4b6fd4520a5/Test_Export-results-count.csv",
        "size": "26"
      }
    ]
  },
  "stats": {
    "sizeInBytes": "46689",
    "exportedArtifactCount": "7",
    "totalArtifactCount": "7"
  },
  "name": "Test Export",
  ...
}
There are two approaches that can do what you require:
The first:
Using OAuth 2.0 refresh and access tokens; however, this requires the intervention of a user acknowledging your app's access.
You can find a nice playground supplied by Google and more info here: https://developers.google.com/oauthplayground/.
You will first need to choose your desired API (in your case it is https://www.googleapis.com/auth/devstorage.full_control under the Cloud Storage JSON API v1 section).
Then, you will need to log in with an admin account and click "Exchange authorization code for tokens" (the fields "Refresh token" and "Access token" will be filled automatically).
Lastly, you will need to choose the right URL to perform your request. I suggest using "List possible operations" to choose the right URL. You will need to choose "Get Object - Retrieve the object" under Cloud Storage API v1 (notice that there are several options named "Get Object"; be sure to choose the one under Cloud Storage API v1 and not the one under Cloud Storage JSON API v1). Now just enter your bucket and object name in the appropriate placeholders and click "Send the request".
The second:
Programmatically download it using the Google client libraries. This is the approach suggested by @darkfolcer; however, I believe the documentation provided by Google is insufficient and thus does not really help. If a Python example helps, you can find one in the answer to the following question - How to download files from Google Vault export immediately after creating it with Python API?
Once all the exports are created, you'll need to wait for them to be completed. You can use https://developers.google.com/vault/reference/rest/v1/matters.exports/list to check the status of every export in a matter. In the response, refer to the "exports" array and check the value of "status" for each; any that say "COMPLETED" can be downloaded.
To download a completed export, go to the "cloudStorageSink" object of each export and take the "bucketName" and "objectName" values of the first entry in the "files" array. You'll need to use the Cloud Storage API and these two values to download the files. This page has code examples for using the API in all the popular languages: https://cloud.google.com/storage/docs/downloading-objects#storage-download-object-cpp.
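If it helps, here is a rough Python sketch of both steps with google-api-python-client; the service-account file, the delegated admin address, the matter ID and the scopes are assumptions you would adjust to your own setup:
import io

from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

SCOPES = [
    "https://www.googleapis.com/auth/ediscovery.readonly",
    "https://www.googleapis.com/auth/devstorage.read_only",
]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("vault-admin@example.com")  # delegated admin, assumption

vault = build("vault", "v1", credentials=creds)
storage = build("storage", "v1", credentials=creds)

# 1. List the exports of a matter and keep the completed ones.
matter_id = "your-matter-id"
exports = vault.matters().exports().list(matterId=matter_id).execute()
completed = [e for e in exports.get("exports", []) if e["status"] == "COMPLETED"]

# 2. Download every file referenced in each export's cloudStorageSink.
for export in completed:
    for f in export["cloudStorageSink"]["files"]:
        request = storage.objects().get_media(bucket=f["bucketName"],
                                              object=f["objectName"])
        with io.FileIO(f["objectName"].split("/")[-1], mode="wb") as out:
            downloader = MediaIoBaseDownload(out, request)
            done = False
            while not done:
                _, done = downloader.next_chunk()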
Hope it helps.
The issue you are seeing is because the API works on the principle of least privilege.
The implication for you is that, since your objective is to download the files from the export, you get permission to download only those files, not the whole bucket (even if it contains only those files).
This is why, when you request information about the storage bucket, you get the 403 (permission) error. However, you do have permission to download the files inside the bucket. So what you should do is get each object directly, with requests like this (using the information in the question):
GET https://storage.googleapis.com/storage/v1/b/408d9135-6155-4a43-9d3c-424f124b9474/o/a740999b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd7886-fe02-432f-83c-a4b6fd4520a5/Test_Export-1.zip
So, in short, instead of getting the full bucket, get each individual file generated by the export.
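A small sketch of that direct request from Python; the access token is a placeholder, the object name is URL-encoded into the path, and alt=media asks for the file contents rather than its metadata:
from urllib.parse import quote

import requests

bucket = "408d9135-6155-4a43-9d3c-424f124b9474"
obj = "a740999b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd7886-fe02-432f-83c-a4b6fd4520a5/Test_Export-1.zip"

url = ("https://storage.googleapis.com/storage/v1/b/" + bucket +
       "/o/" + quote(obj, safe="") + "?alt=media")
r = requests.get(url, headers={"Authorization": "Bearer ACCESS_TOKEN"})
with open("Test_Export-1.zip", "wb") as out:
    out.write(r.content)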
Hope this helps.
I've been testing reading from the Graph API with an app I'm working on for a while. I've been reading events directly from their /{id} endpoints using the Python package. When I attempted this today, however, it didn't work. The response, when attempted using the Graph API Explorer, was as follows:
{
  "error": {
    "message": "Unsupported get request. Object with ID 'XXXXXXXXXXX' does not exist, cannot be loaded due to missing permissions, or does not support this operation. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api",
    "type": "GraphMethodException",
    "code": 100,
    "error_subcode": 33,
    "fbtrace_id": "HAli25GZ3N4"
  }
}
The Explorer itself seems to know somehow that the object in question is an event, as the field options it gives in the left sidebar are all specific to Event objects. I'm aware you need to go through App Review to be able to read public Events, but I haven't needed to thus far. What's the issue?
I've also checked the changelogs, which state that nothing breaking has occurred today in that area. One thing to note is that I was briefly demoted to Moderator of the Page I'm trying to read from. I've tried using my personal access token and that of the Page too.
I am generating a new access token using this answer.
Using the access token, I can send requests to Facebook's Graph API.
For example, I want to pull the details of https://www.facebook.com/custtap. Then this URL:
https://graph.facebook.com/v2.5/custtap?fields=name,about,emails&access_token=ACCESS_TOKEN_GOES_HERE
This works fine; I get the desired results. Similarly, it works fine for other pages that I don't have any access to.
But this doesn't work for the page https://www.facebook.com/LatentView:
https://graph.facebook.com/v2.5/LatentView?fields=name,about,emails&access_token=ACCESS_TOKEN_GOES_HERE
This returns
{
  "error": {
    "message": "Unsupported get request. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api",
    "type": "GraphMethodException",
    "code": 100,
    "fbtrace_id": "C6DYVxvWZ91"
  }
}
The page is access-restricted (page settings) in some way – that can, for example, be based on age or location, or apply to alcohol-related content.
Therefore you need to either use a user access token for a user that is allowed to see the page, or a page access token (which requires admin access to the page to obtain).
Everything you need to know is documented here: https://developers.facebook.com/docs/facebook-login/access-tokens
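For example, the same request made with a token for a user who is allowed to see the page (the token value is a placeholder):
import requests

params = {
    "fields": "name,about,emails",
    # user or page access token that can see the restricted page
    "access_token": "USER_OR_PAGE_ACCESS_TOKEN",
}
r = requests.get("https://graph.facebook.com/v2.5/LatentView", params=params)
print(r.json())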
I've been using django-social-auth (https://github.com/omab/django-social-auth) with some success - logging in from Twitter and Facebook has been set up without difficulty. However, when I log in from some OpenID providers, I am faced with an empty variable for the username, and the social-auth app allows this (despite me having set SOCIAL_AUTH_DEFAULT_USERNAME).
Whilst SOCIAL_AUTH_DEFAULT_USERNAME working properly might be an interim solution, ideally I'd rather the username was set automatically from the OpenID provider. Basically, I'm left with two possible solutions:
1) Make a custom method to extract some of the extra data sent from the openID provider to set the username from that.
2) Force the user to set a username when they first login.
Whilst (2) is less elegant, it ensures that a username has been inserted each time, and it also obviates the need for postprocessing of the automatic information, which may not be in a suitable format for a username.
My question amounts to: how can I go about implementing either of the above? Complete answers are not necessary, but pointers to a method would be much appreciated!
The alternative is to play with django-socialregistration and to see whether that makes life easier!
J
Option 1 is probably the way to go. You can override the get_user_details method of the Backend class for the social providers you're having trouble with and pull the username from there.
It'd look something like this, for example for Facebook:
from social_auth.backends.facebook import FacebookBackend

class CustomFacebookBackend(FacebookBackend):
    name = 'facebook'

    def get_user_details(self, response):
        """Return user details from Facebook account"""
        return {'username': response['name'],
                'email': response.get('email', ''),
                'first_name': response.get('first_name', ''),
                'last_name': response.get('last_name', '')}
The "response" variable is the deserialized data returned from the request to the OAuth provider for the user details, so you can pull whatever you need from that, or inject your own values here. social-auth will take whatever you stick in "username" and use that as the username for the new account.
If you want more control, you can try overriding the backend's "username" method as well.
When overriding the backend, don't forget to add it to AUTHENTICATION_BACKENDS in settings.py. Also, I'm not sure how this works exactly, but I think you need to add something like this to the file with your custom backend in order to get social-auth to link your backend in properly:
BACKENDS = {
    'facebook': CustomFacebookAuth,
}
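CustomFacebookAuth isn't defined in the snippet above; following django-social-auth's usual layout it would be a thin subclass of the stock FacebookAuth that points at the custom backend (a sketch, assuming the CustomFacebookBackend defined earlier):
from social_auth.backends.facebook import FacebookAuth

class CustomFacebookAuth(FacebookAuth):
    # Use the backend with the overridden get_user_details().
    AUTH_BACKEND = CustomFacebookBackend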