I am sending some events to Mixpanel from within a cloud function, using the Python SDK. I don't want the users' location to be set to the location of the cloud server. I have read the Mixpanel article on this, but the documentation only shows how to ignore the IP for a people_set call, using the meta argument. I assumed the same logic would translate to the track call, since it also has a meta argument in its documentation.
After testing, the people_set call no longer takes the server location, but the track call still does. Does anyone have any ideas why this might be, or how to correctly accomplish this for a track() call? Below are the code snippets for the two calls:
mp_eu.people_set(user_id, user_data,
                 meta={'$ignore_time': True, '$ip': 0})

mp_eu.track(user_id, 'event_name', event_data,
            meta={'$ignore_time': True, '$ip': 0})
You should add "ip" to the event properties instead of passing it in meta. Set it to the user's real IP address, or to 0 if you just want to stop Mixpanel from using the server's IP:
properties["ip"] = ip  # the user's real IP, or 0 to disable geolocation
mp_eu.track(user_id, 'event_name', properties)
See this Mixpanel help article:
https://help.mixpanel.com/hc/en-us/articles/115004499343
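Put concretely, here is a minimal sketch of that idea, assuming the mixpanel Python package; per my reading of the linked article, an "ip" property of 0 tells Mixpanel not to geolocate (the token and ids below are placeholders):

from mixpanel import Mixpanel

mp_eu = Mixpanel('YOUR_PROJECT_TOKEN')  # placeholder token
user_id = 'user-123'                    # placeholder distinct id

event_data = {'plan': 'premium'}  # whatever you already track
event_data['ip'] = 0              # 0 = don't derive location from the server IP
mp_eu.track(user_id, 'event_name', event_data)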
I want to programmatically get all the actions a user is allowed to perform across AWS services.
I've tried to fiddle with simulate_principal_policy, but it seems this method expects a list of all actions, and I don't want to maintain a hard-coded list.
I also tried calling it with iam:*, for example, and got a generic 'implicitDeny' response, so I know the user is not permitted all the actions, but I require a higher granularity of actions.
Any ideas as to how I can get the action list dynamically?
Thanks!
To start with, there is no programmatic way to retrieve all possible actions (regardless of whether the user is permitted to use them).
You would need to construct a list of possible actions before checking the security. As an example, the boto3 SDK for Python contains an internal list of commands that it uses to validate commands before sending them to AWS.
Once you have a particular action, you could use the Policy Simulator API to validate whether a given user would be allowed to make a particular API call. This is much easier than attempting to parse the various Allow and Deny permissions associated with a given user.
However, a call might be denied based upon the specific parameters of the call. For example, a user might have permissions to terminate any Amazon EC2 instance that has a particular tag, but cannot terminate all instances. To correctly test this, an InstanceId would need to be provided to the simulation.
Also, permissions might be restricted by IP Address and even Time of Day. Thus, while a user would have permission to call an Action, where and when they do it will have an impact on whether the Action is permitted.
Bottom line: It ain't easy! AWS will validate permissions at the time of the call. Use the Policy Simulator to obtain similar validation results.
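For reference, a minimal boto3 sketch of that simulation step (the user ARN and action names below are placeholders you would supply yourself):

import boto3

iam = boto3.client('iam')

response = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:user/some-user',
    ActionNames=['ec2:TerminateInstances', 's3:GetObject'],
)

# each result says whether that action would be allowed or denied
for result in response['EvaluationResults']:
    print(result['EvalActionName'], result['EvalDecision'])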
I am surprised no one has answered this question correctly. Here is code that uses boto3 that addresses the OP's question directly:
import boto3

# note: region_name must be passed as a keyword argument;
# the first positional argument of Session is aws_access_key_id
session = boto3.Session(region_name='us-east-1')

for service in session.get_available_services():
    service_client = session.client(service)
    print(service)
    print(service_client.meta.service_model.operation_names)
IAM, however, is a special case as it won't be listed in the get_available_services() call above:
IAM = session.client('iam')
print('iam')
print(IAM.meta.service_model.operation_names)
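If you then want to feed those operation names into the Policy Simulator, you can prefix them with each service's endpoint prefix to approximate IAM action names. Treat this as an approximation only, since a few services use a different IAM prefix than their endpoint prefix:

import boto3

session = boto3.Session(region_name='us-east-1')

actions = []
for service in session.get_available_services():
    client = session.client(service)
    # endpoint prefix usually (not always) matches the IAM action prefix
    prefix = client.meta.service_model.endpoint_prefix
    actions.extend('%s:%s' % (prefix, op)
                   for op in client.meta.service_model.operation_names)

print(len(actions))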
I'm trying to change the "spamModerationLevel" parameter on all groups (~375 groups) in my company's Google domain, and am struggling with the script I use.
I have managed to get the list of groups with a script inspired by this doc.
What I can't do, however, is update the required parameter.
body = {'spamModerationLevel': 'ALLOW'}
editResult = service.groups().update(groupKey=groupID, body=body).execute()  # groupID: the group's id or email address
This request doesn't return any error, but the moderation level isn't updated.
If I change the body of the request to something like
body={'description': 'test2'}
The request runs fine, and the group's description is updated.
Is there anything I missed? Using the API Explorer here, I can change any parameter I want, so I assume I should be able to do so in a script.
I'm using a queue trigger to pass in some data about a job that I want to run with Azure Functions (I'm using Python). Part of the data is the name of a file that I want to pull from blob storage. Because of this, declaring a file path/name in an input binding doesn't seem like the right direction, since the function won't have the file name until it gets the queue trigger.
One approach I've tried is to use the azure-storage sdk, but I'm unsure of how to handle authentication from within the Azure Function.
Is there another way to approach this?
In function.json, the blob input binding can refer to properties from the queue payload. The queue payload needs to be a JSON object.
Since this is function.json, it works for all languages.
See official docs at https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings
For example, in your function.json (a blob input binding also needs a direction):
{
    "name": "imageSmall",
    "type": "blob",
    "direction": "in",
    "path": "container/{filename}"
}
And if your queue message payload is:
{
    "filename": "myfilename"
}
Then the {filename} token in the blob's path expression will get substituted.
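As an aside, if I remember the (experimental) v1 Python worker correctly, each binding is then handed to the script as a temporary file whose path sits in an environment variable named after the binding. Treat this sketch as an assumption rather than gospel:

import os

# "imageSmall" matches the binding name from function.json above;
# in the v1 Python worker the env var holds a temp-file path
with open(os.environ['imageSmall'], 'rb') as f:
    blob_bytes = f.read()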
Typically, you store connection strings / account keys in the App Settings of the Function App, and then read them by accessing environment variables. I haven't used Python in Azure much, but I believe that looks like:
import os
connection_string = os.environ['ConnectionString']
I've found one example of a Python function which does what you ask for: queue trigger + blob operation.
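In the same spirit, here is a rough sketch of that combination, assuming the legacy azure-storage SDK and a connection string stored in an app setting named StorageConnectionString (the container and setting names are illustrative):

import json
import os

from azure.storage.blob import BlockBlobService

def process_queue_message(queue_item):
    # queue payload like {"filename": "myfilename"}
    job = json.loads(queue_item)
    blob_service = BlockBlobService(
        connection_string=os.environ['StorageConnectionString'])
    # pull the blob named in the queue message from the container
    blob = blob_service.get_blob_to_bytes('container', job['filename'])
    return blob.content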
Storing secrets can (also) be done using App Settings.
In Azure, go to your Azure Functions App Service, then click "Application settings". Scroll down to the "App settings" list, which consists of key-value pairs. Add your key, for example MY_CON_STR, with the actual connection string as the value.
Don't forget to click Save at this point.
Now, in your application (your Function for this example), you can load the stored value using its key. For example, in python, you can use:
os.environ['MY_CON_STR']
Note that since the setting isn't saved locally, you have to run the function from within Azure for it to pick the value up. Unfortunately, Azure Functions applications do not contain a web.config file.
I'm trying to figure out an effective way to test how my server handles webhooks from Stripe. I'm setting up a system to add multiple subscriptions to a customer's credit card, which is described on Stripe's website:
https://support.stripe.com/questions/can-customers-have-multiple-subscriptions
The issue I'm having is figuring out how to effectively test that my server is executing the scripts correctly (i.e., adding the correct subscriptions to the invoice, recording the events in my database, etc.). I'm not too concerned about automating the test right now; I'm just struggling to run any good test on the script at all. Has anyone done this with Django previously? What resources and tools did you use to run these tests?
Thanks!
I did not use any tools to run the tests. In fact, Stripe has a FULL API REFERENCE which displays the information you have sent to them, and it also displays any errors. Stripe is very easy to set up, cheap, and has very detailed documentation.
Here is what I did:
First, I created a Stripe account. With that account, they give you:
TEST_SECRET_KEY: used for sending payment information to Stripe (for testing)
TEST_PUBS_KEY: identifies your website when communicating with Stripe (for testing)
LIVE_SECRET_KEY: used for sending payment information to Stripe (for live)
LIVE_PUBS_KEY: identifies your website when communicating with Stripe (for live)
API_VERSION: "2012-11-07" (this version is for testing only)
When you log in you will see Documentation at the top. Click it and they give you a step-by-step tutorial on how to create a form, how to create a subscription, how to handle errors, and much more.
To check whether your script is executing and connecting to Stripe, click FULL API REFERENCE and then choose Python. On that page you will see the information you have sent and any errors you have encountered.
What I really like is that if Stripe detects an error, the system points it out and gives you a solution. The solution is on the left side, and the information sent is shown on the right side.
Stripe is divided into two worlds: test mode and live mode. In test mode, you can create new customers, add new invoices, set up your subscriptions, and much more. Whatever you do in test mode works the same way once your Stripe account is live.
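To make that concrete, here is a tiny sketch of exercising test mode with the stripe Python library (the key is a placeholder; substitute your own TEST_SECRET_KEY):

import stripe

stripe.api_key = "sk_test_..."  # placeholder for your TEST_SECRET_KEY

# create a customer in test mode; it shows up in the dashboard logs
customer = stripe.Customer.create(
    email="test@example.com",
    description="test customer",
)
print(customer.id)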
I really love that Stripe provides logs for the webhooks; however, it is difficult to view the error responses from them, so I set up a script using the Requests library. First, I went to the Stripe dashboard and copied one of the requests they were sending.
Events & Webhooks --> click on one of the requests --> copy the entire request
import requests

data = """ PASTE COPIED JSON REQUEST HERE """

# insert the appropriate url/endpoint below
res = requests.post("http://localhost:8000/stripe_hook/", data=data).text

with open("hook_result.html", "w") as output:
    output.write(res)
Now I could open hook_result.html and see any Django errors that may have come up (given DEBUG=True in Django).
In django-stripe-payments I have a test suite that, while far from comprehensive, is meant to be a start at getting there. What I do is copy a real webhook's data, scrub it of sensitive data, and add it as data to the test.
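For illustration, a minimal sketch of that idea in a Django test case (the endpoint URL and payload fields here are made-up, scrubbed stand-ins):

import json

from django.test import TestCase

class StripeWebhookTests(TestCase):
    def test_customer_updated_event(self):
        # scrubbed copy of a real webhook payload
        event = {
            "id": "evt_00000000000000",
            "type": "customer.updated",
            "livemode": False,
            "data": {"object": {"id": "cus_00000000000000"}},
        }
        response = self.client.post(
            "/stripe_hook/",  # your webhook endpoint
            data=json.dumps(event),
            content_type="application/json",
        )
        self.assertEqual(response.status_code, 200)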
Testing Stripe webhooks is a pain. I don't use Django, so my answer will be more general.
My PHP webhook handler parses the webhook data and dispatches handler functions accordingly. In my handler class, I set up class properties with legitimate data for all the ids that the test webhooks mangle. Then I have a condition in each of my handler functions that tests for livemode. If it is false, I replace the mangled ids with legitimate test ids.
I also have another class property called $fakeLivemode, which I set to true when I'm testing. This allows me to force the code to process as though in live mode.
So, for example, when testing the customer.subscription.updated event, the plan id and customer id get botched. So in that handler I would do this:
if ($event->livemode === true || $this->fakeLivemode)
{
    if ($this->fakeLivemode)
    {
        // override botched data returned by test webhook
        $event->data->object->plan->id = $this->testPlanId;
        $event->data->object->customer = $this->testCustomerId;
    }

    // process webhook
}
Does that help?
I have a Flask application, served with Nginx + WSGI (FastCGI & gevent), and I use standard Flask sessions. I do not use session.permanent=True or any other extra option; I simply set SECRET_KEY in the default configuration.
I do not save any (key, value) pairs in the session, and rely only on the SID = session['_id'] entry to identify a returning user. I use the following code to read the SID:
@page.route('/')
def main(page='home', template='index.html'):
    if not request.args.get('silent', False):
        print >> sys.stderr, "Session ID: %r" % session['_id']
I made the following observations:
1. For the same IP address but different browsers, I get different SIDs - that's expected;
2. For different IPs and the same browser, I again get different SIDs - expected;
3. For the same IP address with the same browser, I get the same SID - also expected.
Now, point (3) is interesting, because even if I delete the corresponding cookie, the SID remains constant! To some extent even that might be understandable, but I was actually expecting the SID to change between different cookies. The only difference I see is that
session.new is True
for the first request immediately after the deletion of the cookie. Even that is very much expected; but given these facts, I face the following problems:
1. Does this mean that for different users sitting behind the same IP (with the same browser configuration) my back-end will mistake them for the same user?
2. If point (1) is not the case, the current behavior of these "sticky" sessions is actually quite pleasant, since it avoids the situation where my users might lose their data just because they deleted the corresponding cookie. They can still save the day by revisiting the site from the same network with the same browser. I like that, but only if point (1) is not the case.
3. Assuming point (1) will actually bite me, would the conclusion be to save a token in the session, and hence accept the fate that the user can blow himself up simply by deleting his cookie?
4. Or is there a way to tell Flask to give different SIDs for each fresh cookie?
Actually, this question arose because I used a load-testing service which was simulating different users (on the same IP), but my back-end kept seeing them as a single user, since the corresponding SIDs were all the same.
The application is available for tests at http://webed.blackhan.ch (and upon release it will move to https://notex.ch [a browser-based text editor]). Thank you for your answers.
It looks like you're using the Flask-Login extension. Here's the code that generates the id token:
def _create_identifier():
    base = unicode("%s|%s" % (request.remote_addr,
                              request.headers.get("User-Agent")),
                   'utf8', errors='replace')
    hsh = md5()
    hsh.update(base.encode("utf8"))
    return hsh.digest()
It's basically just md5(ip_address + user_agent).
Flask uses Werkzeug's secure cookies to store this identifier. Secure cookies are (as their name suggests) secure:
This module implements a cookie that is not alterable from the client because it adds a checksum the server checks for. You can use it as session replacement if all you have is a user id or something to mark a logged in user.
session['_id'] is not an actual session identifier. It's just a value used by Flask-Login to implement Session Protection.
Standard Flask sessions do not have an SID - as it would serve no purpose since the actual content of the session is stored in the cookie itself. Also see this.
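If you do need an identifier that changes with every fresh cookie, one option is to store your own random token in the session on the first visit. A sketch of that idea, reusing the blueprint from the question (the 'uid' key and uuid4 choice are arbitrary, not a Flask convention):

import uuid

from flask import render_template, session

@page.route('/')
def main(page='home', template='index.html'):
    # issue a fresh token the first time this cookie shows up
    if 'uid' not in session:
        session['uid'] = uuid.uuid4().hex
    return render_template(template)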
It's now 2022, and Flask-Session does support session.sid to get a generated UUID, which looks something like this:
print(session.sid)
>>> f9c792fa-70e0-46e3-b84a-3a11813468ce
From the docs (https://flasksession.readthedocs.io/en/latest/)
sid
Session id, internally we use uuid.uuid4() to generate one session id. You can access it with session.sid.
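For completeness, a minimal sketch of wiring up Flask-Session so that session.sid exists (the filesystem backend is just one choice among the supported session types):

from flask import Flask, session
from flask_session import Session

app = Flask(__name__)
app.config['SESSION_TYPE'] = 'filesystem'  # server-side session store
Session(app)

@app.route('/')
def index():
    session['seen'] = True  # touch the session so one is created
    return session.sid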