I'm trying to create a monthly subscription with stripe.
I wrote some sample code like this:
if event_type == 'checkout.session.completed':
    # Payment is successful and the subscription is created.
    # You should provision the subscription and save the customer ID to your database.
    try:
        session = stripe.checkout.Session.retrieve(
            event['data']['object'].id, expand=['line_items', 'subscription'])
        _current_period_end = session['subscription']['current_period_end']
        _current_period_start = session['subscription']['current_period_start']
        _product_id = session['subscription']['items']['data'][0]['plan']['product']
        _user_id = session['client_reference_id']
        _stripe_customer_id = session['customer']
        _subscription_id = session['subscription']['id']
        '''
        do other things to update user package
        '''
    except Exception as e:
        '''
        error log
        '''
elif event_type == 'invoice.paid':
    if THIS_IS_NOT_FIRST_TIME:
        parse_event_data
        '''
        do other things to extend subscription
        '''
I have some questions:
1. I parsed the webhook result from the dict returned by stripe.checkout.Session.retrieve. That seemed a little odd to me. What if Stripe updates its API response and uses different names for the dict keys I relied on? Is there another way to get these values, such as with dot notation (e.g. session.product_id)?
2. How can I tell that an invoice.paid event was not triggered by the first payment of a subscription?
3. I want to test the renewal process of my monthly subscription. I used stripe trigger invoice.payment_succeeded, but I need real data for my test accounts (with a test customer, subscription, product, etc.).
4. I can update my user's package using the CHECKOUT_SESSION_ID from the checkout success URL ("success?session_id={CHECKOUT_SESSION_ID}"). Should I do that, or use the checkout.session.completed webhook?
5. I returned an HTTP 500 response to every request to my webhook URL to see whether Stripe would show an error message to the user on the checkout page. However, Stripe just created a successful subscription. In that case Stripe will take a payment from my customer even though I could not update the customer's package in my database. What should I do to prevent this? Should I create a scheduled job to sync data between Stripe and my DB?
You have many separate questions that would be better suited for Stripe's support team directly: https://support.stripe.com/contact/email
Now I will try to touch on some of the questions you had, starting with the first one.
When you call session = stripe.checkout.Session.retrieve(...), you get back an instance of a Checkout Session class. All of the properties of that class map to the properties covered in the API Reference for a Session. This means you can do session.id, which would be something like cs_test_123, or session.created, which is a timestamp of the creation date. Overall it's not really different from accessing it as a dictionary.
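For example, a minimal sketch of attribute-style access (the session ID here is just a placeholder):

import stripe

session = stripe.checkout.Session.retrieve(
    "cs_test_123", expand=["line_items", "subscription"])

print(session.id)                               # same value as session["id"]
print(session.client_reference_id)              # same as session["client_reference_id"]
print(session.subscription.current_period_end)  # expanded objects work the same way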
You're also asking whether those keys can change. Stripe explains its backwards compatibility policy in detail in its docs. If they were to rename created to created_at, they would do it in a new API version for new integrations, and it wouldn't impact your code unless you manually changed the API version for your account, so that is safe.
For the invoice.paid event, you want to look at the invoice's billing_reason property which would be subscription_create for the first invoice.
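As a minimal sketch of that check inside your webhook handler (extend_subscription is a hypothetical helper, not part of Stripe):

if event_type == 'invoice.paid':
    invoice = event['data']['object']
    if invoice['billing_reason'] == 'subscription_create':
        # the subscription's first invoice; already handled in
        # checkout.session.completed, so nothing to do here
        pass
    elif invoice['billing_reason'] == 'subscription_cycle':
        # a renewal payment: extend the user's package
        extend_subscription(invoice['subscription'])  # hypothetical helper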
You can test all of this easily in test mode: create a session, start a subscription, and so on. You can also simulate billing cycle changes.
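For testing renewals with real data specifically, Stripe's test clocks let you advance time for a test customer. A rough, untested sketch (the price ID and timings are placeholders):

import time
import stripe

# Create a test clock frozen at "now" and attach a new test customer to it.
clock = stripe.test_helpers.TestClock.create(frozen_time=int(time.time()))
customer = stripe.Customer.create(test_clock=clock.id, email="test@example.com")
stripe.Subscription.create(
    customer=customer.id,
    items=[{"price": "price_123"}],  # placeholder price ID
)

# Advancing the clock past the billing cycle makes Stripe generate the renewal
# invoice and fire the corresponding webhooks (invoice.paid, etc.).
# Note: the customer needs a default payment method for the renewal to succeed.
stripe.test_helpers.TestClock.advance(
    clock.id, frozen_time=int(time.time()) + 32 * 24 * 60 * 60)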
I hope this helps but chatting with their support team is your best bet since those are more integration questions and not coding questions.
Working with Stripe and trying to set some basic info in the metadata field. I have two plans created, a paid one and a free one. The free plan is used when a customer cancels.
Many of these customers have already been changed through the Stripe Dashboard, so using a webhook will not work.
With this code I am able to get all customers on a specific plan and record that in metadata. The problem is that the .created date gives me the date the subscription was created, not the date the plan was changed.
If I change the plan in the Dashboard by adding a new plan and deleting the old one (from that customer), I can use the time the old plan was unsubscribed. But the Dashboard's change-plan option does something different, and there is no unsubscribe.
My app is a connected account creating charges on other Stripe accounts that have a dashboard of their own, so simply not using the change-plan button is not an option.
Here is the code that gets the plan and created date.
import datetime
import stripe

canceled = stripe.Subscription.list(
    plan='plan_Elm8GW7mwgDj5S',
    stripe_account=stripe_keys['acct_num'],
)
for cancel in canceled.auto_paging_iter():
    customer_id = cancel.customer
    cd = cancel.created
    canceled_date = datetime.datetime.fromtimestamp(cd).strftime('%m-%d-%Y')
    stripe.Customer.modify(
        customer_id,
        stripe_account=stripe_keys['acct_num'],
        metadata={'Status': 'Canceled',
                  'Canceled On': canceled_date},
    )
thank you!
You can use the API to list events. You can specify the type of event to retrieve just those events, and look at the creation time of the event to know when it occurred.
It sounds like you want to focus on the customer.subscription.created event and then look at those events for subscriptions using the free plan. There are also other customer.subscription.* events for updates, deletions, and trials ending.
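A minimal, untested sketch of that (stripe_keys comes from your code above; note that Stripe only retains events for a limited window, around 30 days):

import datetime
import stripe

# List recent subscription-created events on the connected account and keep
# the ones for the free plan.
events = stripe.Event.list(
    type='customer.subscription.created',
    stripe_account=stripe_keys['acct_num'],
)
for event in events.auto_paging_iter():
    subscription = event['data']['object']
    plan_id = subscription['items']['data'][0]['plan']['id']
    if plan_id == 'plan_Elm8GW7mwgDj5S':  # the free plan from the question
        changed_on = datetime.datetime.fromtimestamp(event['created'])
        print(subscription['customer'], changed_on.strftime('%m-%d-%Y'))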
I'm using Google App Engine (python) for the backend of a mobile social game. The game uses Twitter integration to allow people to follow relative leaderboards and play against their friends or followers.
By far the most expensive piece of the puzzle is the background (push) task that hits the Twitter API to query for the friends and followers of a given user, and then stores that data within our datastore. I'm trying to optimize that to reduce costs as much as possible.
The Data Model:
There are three main models related to this portion of the app:
from google.appengine.ext import ndb

class User(ndb.Model):
    '''General user info, like scores and stats'''
    # key id => randomly generated string that uniquely identifies a user,
    # along the lines of user_kdsgj326
    # (I realize I probably should have just used the integer ID that GAE
    # creates, but it's too late for that)

class AuthAccount(ndb.Model):
    '''Authentication mechanism.
    A user may have multiple auth accounts - one for each provider'''
    # key id => concatenation of the auth provider and the auth provider's unique
    # ID for that user, ie, "tw:555555", where '555555' is their twitter ID
    auth_id = ndb.StringProperty(indexed=True)  # ie, '555555'
    user = ndb.KeyProperty(kind=User, indexed=True)
    extra_data = ndb.JsonProperty(indexed=False)  # twitter picture url, name, etc.

class RelativeUserScore(ndb.Model):
    '''Denormalization for quickly generating relative leaderboards'''
    # key id => same as their User id, ie, user_kdsgj326, so that we can quickly
    # retrieve the object for each user
    follower_ids = ndb.StringProperty(indexed=True, repeated=True)
    # misc properties for the user's score, name, etc. needed for the leaderboard
I don't think it's necessary for this question, but just in case, here is a more detailed discussion that led to this design.
The Task
The background thread receives the twitter authentication data and requests a chunk of friend IDs from the Twitter API, via tweepy. Twitter sends up to 5000 friend IDs by default, and I'd rather not arbitrarily limit that more if I can avoid it (you can only make so many requests to their API per minute).
Once I get the list of the friend IDs, I can easily translate that into "tw:" AuthAccount key IDs, and use get_multi to retrieve the AuthAccounts. Then I remove all of the Null accounts for twitter users not in our system, and get all the user IDs for the twitter friends that are in our system. Those ids are also the keys of the RelativeUserScores, so I use a bunch of transactional_tasklets to add this user's ID to the RelativeUserScore's followers list.
The Optimization Questions
The first thing that happens is a call to Twitter's API. Given that this is required for everything else in the task, I'm assuming I would not get any gains in making this asynchronous, correct? (GAE is already smart enough to use the server for handling other tasks while this one blocks?)
When determining if a twitter friend is playing our game, I currently convert all twitter friend ids to auth account IDs, and retrieve by get_multi. Given that this data is sparse (most twitter friends will most likely not be playing our game), would I be better off with a projection query that just retrieves the user ID directly? Something like...
twitter_friend_ids = twitter_api.friend_ids()  # potentially 5000 values
friend_system_ids = AuthAccount \
    .query(AuthAccount.auth_id.IN(twitter_friend_ids)) \
    .fetch(projection=[AuthAccount.user_id])
(I can't remember or find where, but I read this is better because you don't waste time attempting to read model objects that don't exist.)
Whether I end up using get_multi or a projection query, is there any benefit to breaking up the request into multiple async queries, instead of trying to get / query for potentially 5000 objects at once?
I would organize the task like this:
Make an asynchronous fetch call to the Twitter feed
Use memcache to hold all the AuthAccount->User data:
Request the data from memcache, if it doesn't exist then make a fetch_async() call to the AuthAccount to populate memcache and a local dict
Run each of the twitter IDs through the dict
Here is some sample code:
from google.appengine.api import memcache

future = twitter_api.friend_ids()  # make this asynchronous
auth_users = memcache.get('auth_users')
if auth_users is None:
    auth_accounts = AuthAccount.query() \
        .fetch(projection=[AuthAccount.auth_id,
                           AuthAccount.user_id])
    auth_users = dict([(a.auth_id, a.user_id) for a in auth_accounts])
    memcache.add('auth_users', auth_users, 60)
twitter_friend_ids = future.get_result()  # get async twitter results
friend_system_ids = []
for id in twitter_friend_ids:
    friend_id = auth_users.get("tw:%s" % id)
    if friend_id:
        friend_system_ids.append(friend_id)
This is optimized for a relatively smaller number of users and a high rate of requests. Your comments above indicate a higher number of users and a lower rate of requests, so I would only make this change to your code:
twitter_friend_ids = twitter_api.friend_ids() # potentially 5000 values
auth_account_keys = [ndb.Key("AuthAccount", "tw:%s" % id) for id in twitter_friend_ids]
friend_system_ids = filter(None, ndb.get_multi(auth_account_keys))
This will use ndb's built-in memcache to hold data when using get_multi() with keys.
I want to add a 'check username available' feature to my signup page using AJAX. I have a few doubts about how I should implement it.
With which event should I register my AJAX requests? We can send the requests when the user focuses out of the 'username' input field (blur event) or as they type (keyup event). Which provides the better user experience?
On the server side, a simple way of dealing with requests would be to query my main 'Accounts' database. But this could lead to a lot of requests hitting my database (even more if we POST using the keyup event). Should I maintain a separate model for registered usernames only and use that to get better results?
Is it possible to use Memcache in this case? I could initialize the cache with every username as a key, update it as users register, and use a sentinel key to check whether the cache is actually initialized, otherwise passing the queries directly to the DB.
Answers -
Do the check on blur. If you do it on key up, you will be hammering your server with unnecessary queries, annoying the user who is not yet done typing, and likely lag the typing anyway.
If your Account entity is very large, you may want to create a separate AccountName entity, and create a matching AccountName whenever you create a real Account (but this is probably an unnecessary optimization). When you create the Account (or AccountName), be sure to assign id=name. Then you can do AccountName.get_by_id(name) to quickly see whether the name has already been taken, and the entity will automatically be pulled from memcache if it has been recently dealt with.
By default, GAE NDB will automatically populate memcache for you when you put or get entities. If you follow my advice in step 2, things will be very fast and you won't have to mess around with pre-populating memcache.
If you are concerned about 2 people simultaneously requesting the same user name, put your create method in a transaction:
@classmethod
@ndb.transactional()
def create_account(cls, name, other_params):
    acct = Account.get_by_id(name)
    if not acct:
        acct = Account(id=name)  # plus assignments from other_params
        acct.put()
    return acct
I would recommend the blur event of the username field, combined with some sort of inline error/warning display.
I would also suggest maintaining a memcache of registered usernames, to reduce DB hits and improve user experience - although probably not populate this with a warm-up, but instead only when requests are made. This is sometimes called a "Repository" pattern.
BUT, you can only populate the cache with USED usernames - you should not store the "available" usernames here (or if you do, use a much lower timeout).
You should always check directly against the DB/Datastore when actually performing the registration. And ideally in some sort of transactional method so that you don't have race conditions with multiple people registering.
BUT, all of this work is dependent on several things, including how busy your app is and what data storage tech you are using!
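As a rough sketch of that repository-style check on GAE (ndb + memcache; the Account model and key scheme here are assumptions based on the answers above):

from google.appengine.api import memcache
from google.appengine.ext import ndb

class Account(ndb.Model):
    pass  # the username is stored as the entity's key id

def is_username_taken(name):
    cache_key = 'username:%s' % name
    if memcache.get(cache_key):
        return True  # cached as taken
    taken = Account.get_by_id(name) is not None
    if taken:
        # only cache positives: an "available" result can go stale the
        # moment someone registers that name
        memcache.add(cache_key, True)
    return taken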
How can I get the details of the users for whom a workflow state is intended?
In my case I have a 4-state workflow with private as the initial state, then pending, reviewed, and published. When a contributor adds a page, its state is private and they can submit it for review. The reviewer then gets a notification email on the transition (I have added a Python script to send the mail).
Since Products.DCWorkflow has 5 default variables (action, actor, time, comments and review_history), I'm able to get the user who requested the transition by using the actor variable:
actorid = wf_tool.getInfoFor(obj, 'actor')
actor = context.portal_membership.getMemberById(actorid)
My problem is: how to get the details of the user who is going to review?
PS: my script works on the workflow's status_change object.
You can't get the name of the person who "is going to review" - it's not fixed until someone reviews. In a default setup, you could find the list of members of the Reviewers group through the Groups tool, and know who is authorized to review, but that's not the same thing.
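As a sketch of that lookup (untested; assumes the default Reviewers group name, so adjust it to your setup):

# Look up the members of the Reviewers group via the Groups tool.
group = context.portal_groups.getGroupById('Reviewers')
if group is not None:
    for member in group.getGroupMembers():
        reviewer_email = member.getProperty('email')
        # send the notification mail to each authorized reviewer here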
I'm not sure if it's just me or everyone, but I have the following code:
http://api.twitter.com/1/statuses/user_timeline.xml?screen_name=barbara_volkwyn
http://api.twitter.com/1/statuses/user_timeline.xml?user_id=248623669
Apparently, according to the Twitter API, the user with screen_name = "barbara_volkwyn" has user id = 248623669. However, when I run the first API call above I get a totally different result, and, even weirder, when I run the second API call, the user object contained in the returned result is not even the same user.
I wonder if anyone has the same problem; feel free to give it a try.
Regards,
Andy.
Your user ID for barbara_volkwyn isn't valid. It should be: 264882189
You can fetch user IDs through the API or with https://tweeterid.com/
The user_ids reported by the Search API aren't the same as the user_ids used in the Twitter REST API -- I'm unsure if that's where you found the user_id 248623669, though.
A timeline contains tweets, which in turn contain embedded (but cached) user objects, usually reflecting the state of the user at the time the Tweet was published. Sometimes users change their screen_names, so a user by the name of @barbara_volkwyn might be user_id 1234 one day and user_id 5678 the next day, while the tweets that belonged to user_id 1234 will always belong to user_id 1234, regardless of the screen_name.
The user_id for @barbara_volkwyn according to the REST API is 264882189. It's entirely possible that someone held the same screen name but a different user_id at another time. The only way to ever be certain about the identity of a Twitter user is to refer to them by their REST API user_id -- screen_names are transitory and can be modified by the end-user at any time.
As I mentioned, cached user objects used within statuses can become stale -- the most reliable source for up-to-date information about a single user account is the user/show API method. The most reliable source for up-to-date information on recent Tweets by an account is the statuses/user_timeline method.
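For example, in the same v1 URL style as above (endpoints assumed from that era of the API):

http://api.twitter.com/1/users/show.json?user_id=264882189
http://api.twitter.com/1/statuses/user_timeline.json?user_id=264882189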
The embedded objects work for most scenarios, but if you're looking for maximum accuracy, the distinct resources are best.
Thanks, Taylor.