First, I should say I'm new to Azure; most of my cloud experience comes from AWS.
I'm using IoT Hub with a connected device that sends a message every minute.
So far, I followed this guide from the Microsoft team:
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-live-data-visualization-in-web-apps
Now, I wanted to create something like a Lambda function in AWS, and from what I understand, in Azure this is called Azure Functions. I wanted to create a function that gets triggered every time a new message from my device is received, does something (let's say adds 1), and then sends it back (so I can pull the 'new' data to my backend).
So far what I did was to create a new "Azure Function" (which I guess is like a container for functions?).
Then I tried to create a new function by clicking 'Add new' and choosing the 'IoT Hub (Event Hub)' template. But when I get to my code and try to test it, I get a 404 response. Do I need to create something else? Do I need to create a new 'event' in my IoT Hub? Do I need to create a new 'Event Hub'?
Thanks!
P.S. I tried to Google it, but most of the answers were for the old portal or in C#; I'm using Node and Python.
This scenario is covered in this sample. The sample is in JavaScript. It writes the messages to a database, but you can change this part if you want.
To answer some of your other questions:
IoTHub comes with a built-in Event Hub, so no need to create anything else! Your Azure Function will use an Event Hub trigger to subscribe to events coming from IoT Hub. By default, every event that a device sends to IoT Hub will end up on that endpoint, so to 'create' a new event, use the device SDK (on a device or on your machine) to send a message to IoT Hub.
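For example, a minimal sketch of sending a test message with the Python device SDK (azure-iot-device); the connection string and payload are placeholders:

# pip install azure-iot-device
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: copy the device's connection string from the IoT Hub portal
conn_str = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(conn_str)
client.send_message(Message('{"counter": 1}'))  # arrives on the built-in endpoint
client.shutdown()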
You mentioned 'sending it back', but in most cases you don't have to respond to IoT Hub messages. You can for instance store the message in a database and build a web application that reads from that database. You could also get real-time updates in your web application, but that's outside the scope of your question.
I have tried to answer your queries below:
I wanted to create a function that gets triggered every time a new message from my device is received, does something (let's say adds 1), and then sends it back (so I can pull the 'new' data to my backend).
If you mean sending the data back to IoT Hub, that doesn't seem logical to me, as the manipulated data is not something the device sent. I would rather treat the Azure Function itself as the backend and save/send the data to a persistent store or a message broker where other consumer(s) can access it.
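For illustration only, a minimal sketch of what such a function body could look like in Python (v1 programming model, assuming a trigger binding named 'event' with cardinality 'one', as in the function.json sketch further below); the persistence step is only hinted at since it depends on your store:

import json
import logging
import azure.functions as func

def main(event: func.EventHubEvent):
    # Decode the device-to-cloud message delivered by the Event Hub trigger
    payload = json.loads(event.get_body().decode("utf-8"))
    payload["counter"] = payload.get("counter", 0) + 1  # the "add 1" step
    # Instead of sending it back to IoT Hub, persist it where your backend
    # can read it (SQL, Cosmos DB, a queue, ...)
    logging.info("Processed message: %s", payload)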
So far what I did was to create a new "Azure Function" (which I guess
it's like a container to functions?) And then I try to create a new
function by click 'Add new' and click on the 'IoT Hub (Event Hub)'
template. But when I get to my code and try to test it I get a 404
response. Do I need to create something else?
There are a couple of ways you can create the Azure Function with the built-in, Event Hub-compatible endpoint as the trigger. Check the image below. Relevant information about built-in endpoints can be found here.
Do I need to create a new 'event' in my IoT Hub?
Not sure exactly what you mean by this. The way the flow works is:
Send telemetry messages from the device. A NodeJS example can be found here.
Add message routing so that messages arriving at the IoT Hub are delivered to the built-in endpoint. Check the image below for telemetry message routing to the built-in endpoint.
Similarly, you can route device twin change events and lifecycle events to the built-in endpoint.
Do I need to create a new 'Event Hub'?
Not required, as the built-in endpoint is Event Hub compatible. Check the documentation here. Unless you have a specific need in your business use case, a custom Event Hub endpoint is not required.
But when I get to my code and try to test it, I get a 404 response.
Now, we need to trigger the Azure Function whenever a new event/message is received on the built-in endpoint. You can do this in a couple of ways:
Azure portal
Command line
VS Code
The main point to note in all of the above is that your Azure Function binding (trigger) must be set correctly in the function.json file. Here is how the trigger looks.
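A minimal sketch of that binding in function.json; the eventHubName and connection values are the ones that must match your Application settings, and cardinality is set to 'one' here to process a single event per invocation (use 'many' for batches):

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "event",
      "eventHubName": "MyEventHub",
      "connection": "myEventHubReadConnectionAppSetting",
      "consumerGroup": "$Default",
      "cardinality": "one"
    }
  ]
}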
The MyEventHub and myEventHubReadConnectionAppSetting values should be picked up from the Application settings. Check the image below.
I suggest you go through this page for an in-depth understanding of how the Event Hub trigger works with Azure Functions.
Once you have done all the above steps, you can open your Function App in the portal and go to the Functions section in the Function App blade. There you can monitor, code & test, and check the integration for your Azure Function.
Related
My requirement: I have an Azure Function with a Service Bus topic trigger written in Python. The Service Bus has one topic with multiple subscriptions within it.
I have to add a SQL filter to a subscription so that a message I send only goes to that subscription if the filter condition is satisfied and then triggers the function app.
How do I add the filter option in Python code? I found multiple references for C#, but I need it for Python.
public async Task SendMessage(MyPayload payload)
{
    string messagePayload = JsonSerializer.Serialize(payload);
    ServiceBusMessage message = new ServiceBusMessage(messagePayload);
    // Subscriptions filter on this application property
    message.ApplicationProperties.Add("goals", payload.Goals);
    // assumption: _sender is a ServiceBusSender field on the class
    try { await _sender.SendMessageAsync(message); }
    catch (Exception ex) { /* log the failure */ }
}
As a sample, I have added the C# code where they add application properties in the function app code, so whichever subscription's filter condition matches (e.g. goals = payload.Goals), the message will go to that subscription.
I want to know how we can add the application properties in Python code for an Azure Function App with a Service Bus topic trigger.
Using the Python client SDK for Azure Service Bus, you can apply a SqlRuleFilter and SqlRuleAction before you start processing your messages.
Pseudocode will look like:
servicebus_mgmt_client.create_rule(topic_name, sub_name, rule_name, filter=sql_filter, action=sql_action)
send_msgs_to_topic()  # set the filtered application properties on your message
receive_msgs()        # received messages will carry those properties
See the detailed examples on GitHub here.
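For example, a rough sketch with the azure-servicebus package (the connection string, names, and filter expression are placeholders):

# pip install azure-servicebus
from azure.servicebus import ServiceBusClient, ServiceBusMessage
from azure.servicebus.management import ServiceBusAdministrationClient, SqlRuleFilter

conn_str = "<service-bus-connection-string>"   # placeholder
topic_name, sub_name = "mytopic", "mysub"      # placeholders

# 1) Attach a SQL filter rule to the subscription. Remove the $Default
#    rule first, otherwise it still matches every message.
mgmt = ServiceBusAdministrationClient.from_connection_string(conn_str)
mgmt.delete_rule(topic_name, sub_name, "$Default")
mgmt.create_rule(topic_name, sub_name, "GoalsRule",
                 filter=SqlRuleFilter("goals > 3"))

# 2) Send a message carrying the application property the filter tests.
with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_topic_sender(topic_name=topic_name) as sender:
        sender.send_messages(ServiceBusMessage(
            '{"team": "A"}', application_properties={"goals": 4}))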
I'm building a Flask server in Python on Cloud Run, for a chatbot to call.
Sometimes, when the user wants to do something with the chatbot, the bot needs to ask the user to log in to a 3rd-party server first.
I have two routes:
Route 1 is "/login"; it returns a simple iframe which opens a login page on a 3rd-party server, generates a "session_id", and saves some info I already have to a global dict called "runtimes" with the "session_id" as key, so that I can use it later when the visitor has successfully logged in.
Route 2 is "/callback/<session_id>". After the user successfully logs in to their account, the 3rd-party server calls this route with a token in the URL parameters. Then I use the "session_id" to read the saved info from "runtimes" and do the rest.
It works well on my local machine. But on Google Cloud Run, because it supports multiple instances, the "callback" sometimes lands on a new instance, which cannot find the entry in "runtimes" because the two requests were handled by different instances.
I know that I could save the runtimes dict to a database to solve this problem, but it looks like overkill... it just doesn't seem right.
Is there an easy way to share "runtimes" between instances?
The solution here is to use a central point of storage: a database, Memorystore, Firestore... something outside of Cloud Run itself.
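For example, a minimal sketch using Firestore as that central store (the collection name and helper functions are illustrative):

# pip install google-cloud-firestore
from google.cloud import firestore

db = firestore.Client()  # uses the Cloud Run service account by default

def save_runtime(session_id: str, info: dict) -> None:
    # Written by the "/login" route
    db.collection("runtimes").document(session_id).set(info)

def load_runtime(session_id: str):
    # Read by the "/callback/<session_id>" route, whichever instance serves it
    snap = db.collection("runtimes").document(session_id).get()
    return snap.to_dict() if snap.exists else None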
You can also try the Cloud Run second-generation execution environment, which allows you to mount a network disk such as Cloud Storage or Filestore. You could, for instance, store the session data in a file whose name is the session ID.
Note: on the Cloud Run side something is cooking, but it won't be 100% safe, only best effort. A database backup will still be required even with that new feature.
I have a Python script running continuously as a WebJob on Azure. Roughly every 3 minutes it generates a new set of data. Once the data is generated, we want to send it to the UI (Angular) in real time.
What would be the ideal (fastest) approach to get this functionality?
The generated data is JSON containing 50 key-value pairs. I read about SignalR, but can I use SignalR directly with my Python code? Is there any other approach, like sockets?
What you need is called a WebSocket: a protocol which allows back-end servers to push data to connected web clients.
There are implementations of WebSocket for Python (a quick search found me this one).
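As an illustration, a minimal sketch with the third-party websockets package (the port, interval, and data generator are placeholders mirroring your description):

# pip install websockets
import asyncio
import json
import websockets

def generate_data():
    # stand-in for your real producer of 50 key/value pairs
    return {f"key_{i}": i for i in range(50)}

async def push_updates(websocket):
    while True:
        await websocket.send(json.dumps(generate_data()))
        await asyncio.sleep(180)  # new data roughly every 3 minutes

async def main():
    async with websockets.serve(push_updates, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())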
Once you have a WebSocket going, you can create a service in your Angular project to handle the updates pushed from your Python service, most likely using observables.
Hopefully this sets you on the right path.
Problem: given N instances launched as part of a VMSS, I would like my application code on each Azure instance to discover the IP addresses of the other peer instances. How do I do this?
The overall intent is to cluster the instances so as to provide active-passive HA or to keep the configuration in sync.
It seems there is some support for REST API-based querying: https://learn.microsoft.com/en-us/rest/api/virtualmachinescalesets/
I would like to know any other way to do it, i.e., either the Python SDK or the instance metadata URL, etc.
The REST API you mentioned has a Python SDK, the "azure-mgmt-compute" client:
https://learn.microsoft.com/python/api/azure.mgmt.compute.compute.computemanagementclient
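As an example, a sketch that lists the private IP of every instance's NIC; note this uses the companion azure-mgmt-network package rather than azure-mgmt-compute, and the resource names are placeholders:

# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Enumerate the NICs of every instance in the scale set
nics = client.network_interfaces.list_virtual_machine_scale_set_network_interfaces(
    "<resource-group>", "<vmss-name>")
for nic in nics:
    for ip_config in nic.ip_configurations:
        print(ip_config.private_ip_address)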
One way to do this would be to use instance metadata. Right now instance metadata only shows information about the VM it's running on, e.g.
curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute?api-version=2017-03-01"
{"compute":
{"location":"westcentralus","name":"imdsvmss_0","offer":"UbuntuServer","osType":"Linux","platformFaultDomain":"0","platformUpdateDomain":"0",
"publisher":"Canonical","sku":"16.04-LTS","version":"16.04.201703300","vmId":"e850e4fa-0fcf-423b-9aed-6095228c0bfc","vmSize":"Standard_D1_V2"},
"network":{"interface":[{"ipv4":{"ipaddress":[{"ipaddress":"10.0.0.4","publicip":"52.161.25.104"}],"subnet":[{"address":"10.0.0.0","dnsservers":[],"prefix":"24"}]},
"ipv6":{"ipaddress":[]},"mac":"000D3AF8BECE"}]}}
You could do something like have each VM send the info to a listener on VM #0 or to an external service, or you could combine this with Azure Files and have each VM write to a common share. There's an Azure template proof of concept here which outputs information from each VM to an Azure File share: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-azure-files-linux - every VM has a mountpoint which contains info written by every VM.
I'm trying to figure out an effective way to test how my server handles webhooks from Stripe. I'm setting up a system to add multiple subscriptions to a customer's credit card, which is described on Stripe's website:
https://support.stripe.com/questions/can-customers-have-multiple-subscriptions
The issue I'm having is figuring out how to effectively test that my server is executing the scripts correctly (i.e., adding the correct subscriptions to the invoice, recording the events in my database, etc.). I'm not too concerned about automating the tests right now; I'm just struggling to run any good test on the script. Has anyone done this with Django before? What resources and tools did you use to run these tests?
Thanks!
I did not use any tools to run the tests. In fact, Stripe has a FULL API REFERENCE which displays the information you have sent to them, and it also displays the errors. Stripe is very easy to set up, cheap, and has full details in its documentation.
What I did:
First I created a Stripe account. With that account, they will give you:
TEST_SECRET_KEY: used for sending payments and information to Stripe (for testing)
TEST_PUBS_KEY: identifies your website when communicating with Stripe (for testing)
LIVE_SECRET_KEY: used for sending payments and information to Stripe (for live)
LIVE_PUBS_KEY: identifies your website when communicating with Stripe (for live)
API_VERSION: "2012-11-07" // this is the version for testing only
When you log in you will see Documentation at the top. Click it and they will give you a step-by-step tutorial on how to create a form, how to create a subscription, how to handle errors, and much more.
To check whether your script is executing and connecting to Stripe, click FULL API REFERENCE and then choose Python. On that page you will see the information you have sent and any errors you have encountered.
What I really like is that if Stripe detects an error, the system will point it out and give you a solution. The solution is on the left side and the information sent is shown on the right side.
Stripe is divided into two worlds: test mode and live mode. In test mode you can create new customers, add new invoices, set up your subscriptions, and much more. Whatever you do in test mode works the same once your Stripe account is live.
I really love that Stripe provides logs for the webhooks; however, it is difficult to view the error responses from them, so I set up a script using the Requests library. First, I went to the Stripe dashboard and copied one of the requests they were sending:
Events & Webhooks --> click on one of the requests --> copy the entire request
import requests

# Paste the copied JSON request between the triple quotes
data = """ PASTE COPIED JSON REQUEST HERE """

# Insert the appropriate url/endpoint below
res = requests.post("http://localhost:8000/stripe_hook/", data=data).text

with open("hook_result.html", "w") as output:
    output.write(res)
Now I can open hook_result.html and see any Django errors that may have come up (given DEBUG=True in Django).
In django-stripe-payments I have a test suite that, while far from comprehensive, is meant to be a start at getting there. What I do is copy a real webhook's data, scrub it of sensitive data, and add it as data to the test.
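For example, a minimal sketch of such a test (the endpoint URL and the scrubbed payload are illustrative, not taken from the library):

import json
from django.test import TestCase

class StripeWebhookTest(TestCase):
    def test_invoice_created_event(self):
        # A scrubbed copy of a real webhook payload (values are placeholders)
        event = {
            "id": "evt_test_000",
            "type": "invoice.created",
            "livemode": False,
            "data": {"object": {"id": "in_test_000", "customer": "cus_test_000"}},
        }
        # Placeholder URL: point this at your own webhook view
        response = self.client.post(
            "/webhooks/stripe/",
            data=json.dumps(event),
            content_type="application/json",
        )
        self.assertEqual(response.status_code, 200)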
Testing Stripe webhooks is a pain. I don't use Django, so my answer will be more general.
My PHP webhook handler parses the webhook data and dispatches handler functions accordingly. In my handler class, I set up class properties with legitimate data for all the IDs that the test webhooks mangle. Then I have a condition in each of my handler functions that tests for livemode. If false, I replace the mangled IDs with legitimate test IDs.
I also have another class property called $fakeLivemode, which I set to true when I'm testing. This allows me to force the code to process as though it were in live mode.
So, for example, when testing the customer.subscription.updated event, the plan ID and customer ID get botched. So in that handler I would do this:
if ($event->livemode === true || $this->fakeLivemode)
{
    if ($this->fakeLivemode)
    {
        // override botched data returned by test webhook
        $event->data->object->plan->id = $this->testPlanId;
        $event->data->object->customer = $this->testCustomerId;
    }

    // process webhook
}
Does that help?