Scene:
I am using Azure DevOps pipelines as a security separator, so that my front end does not access my AKS directly.
(Above is a business requirement I am not able to avoid or change in any way)
What I have so far:
I am able to put together an HTTP POST body with the information that I will be getting from my front end, and I am able to parse it as JSON inside the Azure DevOps pipeline (using Python).
Issue:
I must be able to iterate through each of the objects in my JSON and execute actions as indicated.
JSON example:
{
    "actions": [
        {
            "action": "action(0)",
            "config": {
                "actionType": "Start",
                "stage": "test",
                "region": "North",
                "version": "v756",
                "customer": "Hans"
            }
        },
        {
            "action": "action(1)",
            "config": {
                "actionType": "Stop",
                "stage": "test",
                "region": "East",
                "version": "v752",
                "customer": "Christian"
            }
        },
        {
            "action": "action(2)",
            "config": {
                "actionType": "Delete",
                "stage": "prod",
                "region": "South",
                "version": "v759",
                "customer": "Anderson"
            }
        }
    ]
}
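For reference, here is a minimal sketch of how the parsed payload could be iterated in the pipeline's Python step (the dispatch logic and the environment-variable hand-off are hypothetical placeholders):

import json
import os

# Stand-in for the POST body handed to the pipeline by the front end.
raw_body = os.environ.get("ACTIONS_JSON", '{"actions": []}')
payload = json.loads(raw_body)

for item in payload["actions"]:
    cfg = item["config"]
    # Replace this print with the real handler for each actionType.
    print(f'{item["action"]}: {cfg["actionType"]} {cfg["customer"]} '
          f'{cfg["version"]} in {cfg["region"]}/{cfg["stage"]}')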
TypeScript that generates my testing data
const value = {
actionType: "Create",
stage: "test",
region: "North",
version: "v753",
customer: "Hans"
}
interface Action {
readonly action: string;
readonly config: typeof value;
}
const actions: Array<Action> = [];
for (let i = 0; i < 3; i++) actions.push({
action: `action(${i})`,
config: value
})
const result = JSON.stringify({ actions });
const body = {
templateParameters: {
actions: {
value: result
}
}
}
Current pipeline
name: Test-Deploy-$(Date:yyyyMMdd)-$(Date:hh-mm)$(Rev:.r)

pool:
  vmImage: ubuntu-latest

parameters:
  - name: actions
    type: object
    default: []

stages:
  - stage: test_stage
    displayName: Test stage
    jobs:
      - job: test
        displayName: Test the values
        steps:
          - ${{ each action in parameters.actions }}:
              - task: PowerShell@2
                displayName: Print out the "action" variable
                inputs:
                  targetType: 'inline'
                  script: '"${{ action }}"'
My current thinking:
I would like to be able to iterate through the actions in a "for-each" fashion, as in this pseudo pipeline script below:
- ${{ each action in $(actions) }}:
But I am not able to come up with exactly how that would be done in Azure DevOps Pipelines, so I am hoping that someone here can figure it out with me :)
${{ each }} is a template expression. It's intended for use with parameters rather than variables, because template expressions are evaluated at compile time (so $(variablename) can't have a value yet).
Now, I've not actually tried this myself, but the Runs API has a JSON request body element called templateParameters, which takes an object. What you could try is something like this:
Add a parameter to your pipeline, like:
parameters:
- name: actions
type: object
default: []
When submitting the Runs API call to run your pipeline, include something like:
{
  "previewRun": true,
  "templateParameters": {
    "actions": {
      ... your JSON actions content as generated in your question ...
    },
    ... other parameters to your JSON run request
  }
}
In your pipeline, reference
- ${{ each action in parameters.actions }}:
The previewRun parameter will cause Azure Pipelines to not run the pipeline, but to return the compiled template for debugging and validation purposes. Remove it when you're ready to execute it.
Also, you'll likely need to do some experimenting with templateParameters to get something acceptable to the pipeline, like
declaring it as an array "templateParameters": [ { "actions": [ your actions ] } ] (as I said, I haven't actually done this, but the documentation suggests this might be a good path to explore).
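To make that concrete, here is an untested Python sketch of the Runs API call (the organization, project, pipeline ID, and PAT are placeholders, and the exact api-version may need adjusting):

import requests

# Placeholders - substitute your own organization, project, pipeline ID, and PAT.
org, project, pipeline_id = "myorg", "myproject", 42
pat = "<personal-access-token>"

url = (f"https://dev.azure.com/{org}/{project}"
       f"/_apis/pipelines/{pipeline_id}/runs?api-version=7.1-preview.1")

body = {
    "previewRun": True,  # remove once the compiled template looks right
    "templateParameters": {
        "actions": [
            {"action": "action(0)", "config": {"actionType": "Start", "stage": "test",
                                               "region": "North", "version": "v756",
                                               "customer": "Hans"}},
        ],
    },
}

# A PAT is sent as the password of a basic-auth pair with an empty username.
resp = requests.post(url, json=body, auth=("", pat))
print(resp.status_code, resp.json())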
Related
I feel like it must be possible, but I've yet to find an answer.
I navigate to this view inside my Function App, then click the drop-down arrow and select Application Insight Logs.
As you can see in that picture, there's a log with an [Information] tag. I thought maybe I could do something like this in my Azure Function that's running my python script:
import logging
logging.info("Hello?")
However, I'm not able to get messages to show up in those logs. How do I actually achieve this? If there's a different place where logs created with logging.info() show up, I'd love to know that as well.
host.json file:
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"logLevel": {
"default": "Information",
"Host.Results": "Information",
"Function": "Information",
"Host.Aggregator": "Information"
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
},
"extensions": {
"queues": {
"batchSize": 2,
"maxDequeueCount": 2
}
},
"functionTimeout": "00:45:00"
}
I believe there is no different place where the log info is written; rather, you need to adjust the log levels in host.json to get the different types of logs. Note that the logLevel section belongs inside the logging object; in the host.json above it sits at the top level, where the Functions host will not pick it up.
Here is the workaround I tried to get information-level logs.
In VS Code, I created an Azure Functions project with the Python stack.
Then I added logging.info(f" Calling Activity Function") to the activity function code.
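The original screenshot of that code is not available; here is a minimal sketch of what such an activity function might look like (the function body and return value are illustrative):

import logging

# Activity function (classic Python model, the activity's __init__.py).
def main(name: str) -> str:
    logging.info(f" Calling Activity Function")
    return f"Hello {name}!"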
This is the host.json code by default:
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
}
After running this durable function, the information-level message appears in the logs.
Please refer to this workaround, where I gave information about logging levels and about optimizing Application Insights logs for Azure Python Functions.
I am trying to resolve an issue with assigning an identity to the VM I am creating using the Python SDK. The code:
print("Creating VM " + resource_name)
compute_client.virtual_machines.begin_create_or_update(
resource_group_name,
resource_name,
{
"location": "eastus",
"storage_profile": {
"image_reference": {
# Image ID can be retrieved from `az sig image-version show -g $RG -r $SIG -i $IMAGE_DEFINITION -e $VERSION --query id -o tsv`
"id": "/subscriptions/..image id"
}
},
"hardware_profile": {
"vm_size": "Standard_F8s_v2"
},
"os_profile": {
"computer_name": resource_name,
"admin_username": "adminuser",
"admin_password": "somepassword",
"linux_configuration": {
"disable_password_authentication": True,
"ssh": {
"public_keys": [
{
"path": "/home/adminuser/.ssh/authorized_keys",
# Add the public key for a key pair that can get access to SSH to the runners
"key_data": "ssh-rsa …"
}
]
}
}
},
"network_profile": {
"network_interfaces": [
{
"id": nic_result.id
}
]
},
"identity": {
"type": "UserAssigned",
"user_assigned_identities": {
"identity_id": { myidentity }
}
}
}
I found the last part, the identity block, somewhere on the web (I'm not sure where), but it fails with a strange set/get error when I try to use it. The VM creates fine if I comment out the identity block, but I need the user-assigned identity. I spent the better part of today trying to find information on the options for begin_create_or_update and on the identity piece, but I have had no luck. I am looking for help on how to apply a user-assigned identity to the VM I am creating with Python.
The set/get error occurs because you are declaring the identity block in the wrong way.
If you have an existing user-assigned identity, then you can use the identity block as below:
"identity": {
"type": "UserAssigned",
"user_assigned_identities": {
'/subscriptions/948d4068-xxxxxxxxxxxxxxx/resourceGroups/ansumantest/providers/Microsoft.ManagedIdentity/userAssignedIdentities/mi-identity' : {}
}
As you can see, inside user_assigned_identities it will be:
'User Assigned Identity ResourceID':{}
instead of
"identity_id":{'User Assigned Identity ResourceID'}
I want to add an account that has some information readable by all users. According to the documentation, the user needs the can_get_all_acc_detail permission, so I'm trying to grant it by creating a new role:
tx = self.iroha.transaction([
self.iroha.command('CreateRole', role_name='info', permissions=[primitive_pb2.can_get_all_acc_detail])
])
tx = IrohaCrypto.sign_transaction(tx, account_private_key)
net.send_tx(tx)
Unfortunately, after sending the transaction I see this status:
status_name:ENOUGH_SIGNATURES_COLLECTED, status_code:9, error_code:0(OK)
But then it takes 5 minutes until the transaction times out.
I've noticed that the transaction JSON embeds permissions in a different way than the genesis block does:
payload {
reduced_payload {
commands {
create_role {
role_name: "info_account"
permissions: can_get_all_acc_detail
}
}
creator_account_id: "admin@example"
created_time: 1589408498074
quorum: 1
}
}
signatures {
public_key: "92f9f9e10ce34905636faff41404913802dfce9cd8c00e7879e8a72085309f4f"
signature: "568b69348aa0e9360ea1293efd895233cb5a211409067776a36e6647b973280d2d0d97a9146144b9894faeca572d240988976f0ed224c858664e76416a138901"
}
By comparison, in genesis.block it is:
{
"createRole": {
"roleName": "money_creator",
"permissions": [
"can_add_asset_qty",
"can_create_asset",
"can_receive",
"can_transfer"
]
}
},
I'm using Iroha version 1.1.3 (but I also tested on 1.1.1); the Python Iroha SDK version is 0.0.5.5.
Does the account you used to execute the 'Create Role' command have the "can_create_role" permission?
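One way to check is to query the permissions of your account's role; an untested sketch with the Python SDK (the account ID, key, node address, and role name are placeholders):

from iroha import Iroha, IrohaCrypto, IrohaGrpc

# Placeholders - use your admin account, its private key, and your node address.
iroha = Iroha('admin@example')
net = IrohaGrpc('127.0.0.1:50051')
admin_private_key = '<admin private key hex>'

# Ask which permissions a given role actually has.
query = iroha.query('GetRolePermissions', role_id='admin')
IrohaCrypto.sign_query(query, admin_private_key)
response = net.send_query(query)
print(response)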
So I have an application that uses MongoDB as a database. The application makes use of a few collections.
When and how should I go about defining the "schema" of the database, which includes setting up all the collections as well as the indexes needed?
AFAIK, you are unable to define empty collections in MongoDB (correct me if I am wrong; if I can do this, it basically answers this question). Should I insert a dummy value for each collection and use that to set up all my indexes?
What is the best practice for this?
You don't create collections in MongoDB.
You just start using them immediately whether they “exist” or not.
Now to defining the “schema”. As I said, you just start using a collection, so if you need to ensure an index, just go ahead and do this. No collection creation is needed. Any collection is effectively created when you first modify it (creating an index counts).
> db.no_such_collection.getIndices()
[ ]
> db.no_such_collection.ensureIndex({whatever: 1})
> db.no_such_collection.getIndices()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test.no_such_collection",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"whatever" : 1
},
"ns" : "test.no_such_collection",
"name" : "whatever_1"
}
]
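The same applies from a driver: creating an index implicitly creates the collection, so there is no need for dummy documents. A minimal pymongo sketch (the database, collection, and field names are hypothetical):

from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

# create_index creates the collection on the fly if it does not exist yet.
db["users"].create_index([("email", ASCENDING)], unique=True)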
Create empty collection
This is how you can create an empty collection in MongoDB using the built-in interactive shell:
db.createCollection('someName'); // create empty collection
You don't really have to, though, because as pointed out before, collections are created in real time once you start interacting with the database.
MongoDB is schema-less, end of story, but ...
You could create your own class that interacts with the Mongo database. In that class you could define rules that have to be fulfilled before data can be inserted into a Mongo collection, and throw a custom exception otherwise.
Or, if you are using Node.js server-side, you could install the mongoose package, which lets you interact with the database in an OOP style (why reinvent the wheel, right?).
Mongoose provides a straight-forward, schema-based solution to model
your application data. It includes built-in type casting, validation,
query building, business logic hooks and more, out of the box.
docs: mongoose NPM installation and basic usage
https://www.npmjs.com/package/mongoose
mongoose full documentation http://mongoosejs.com
Mongoose use example (defining schema and inserting data)
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var personSchema = new Schema({
    name: { type: String, default: 'anonymous' },
    age: { type: Number, min: 18, index: true },
    bio: { type: String, match: /[a-zA-Z ]/ },
    date: { type: Date, default: Date.now },
});

var personModel = mongoose.model('Person', personSchema);

var comment1 = new personModel({
    name: 'Witkor',
    age: '29',
    bio: 'Description',
});

comment1.save(function (err, comment) {
    if (err) console.log(err);
    else console.log('following comment was saved:', comment);
});
Wrapping up ...
Being able to set a schema along with restrictions in our code doesn't change the fact that MongoDB itself is schema-less, which in some scenarios is actually an advantage. This way, if you ever decide to make changes to the schema but don't care about backward compatibility, you just edit the schema in your script and you are done. This is the basic idea behind MongoDB: being able to store different sets of data in each document within the same collection. However, some restrictions in the codebase logic are always desirable.
As of version 3.2, mongodb now provides schema validation at the collection level:
https://docs.mongodb.com/manual/core/schema-validation/
Example of creating a schema:
db.createCollection("students", {
validator: {
$jsonSchema: {
bsonType: "object",
required: [ "name", "year", "major", "address" ],
properties: {
name: {
bsonType: "string",
description: "must be a string and is required"
},
year: {
bsonType: "int",
minimum: 2017,
maximum: 3017,
description: "must be an integer in [ 2017, 3017 ] and is required"
},
major: {
enum: [ "Math", "English", "Computer Science", "History", null ],
description: "can only be one of the enum values and is required"
},
gpa: {
bsonType: [ "double" ],
description: "must be a double if the field exists"
},
address: {
bsonType: "object",
required: [ "city" ],
properties: {
street: {
bsonType: "string",
description: "must be a string if the field exists"
},
city: {
bsonType: "string",
description: "must be a string and is required"
}
}
}
}
}
}
})
const mongoose = require("mongoose");

const RegisterSchema = new mongoose.Schema({
  username: {
    type: String,
    unique: true,
    required: true,
  },
  email: {
    type: String,
    unique: true,
    required: true,
  },
  password: {
    type: String,
    required: true,
  },
  date: {
    type: Date,
    default: Date.now,
  },
});

module.exports = mongoose.model("Register", RegisterSchema);
I watched this tutorial.
You have already been taught that MongoDB is schemaless. In practice, however, we have a kind of "schema": the object space of the application whose relations the MongoDB database represents. With the caveat that Ruby is my go-to language, and that I make no claims about the exhaustiveness of this answer, I recommend trying two pieces of software:
1. ActiveRecord (part of Rails)
2. Mongoid (standalone MongoDB "schema", or rather, object persistence system in Ruby)
Expect a learning curve, though. I hope that others will point you to solutions in other great languages outside my expertise, such as Python.
1. Install mongoose:
npm install mongoose
2. Set up the connection string and callbacks
// getting-started.js
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/test');
//call-backs
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function() {
// we're connected!
});
3. Write your schema
var kittySchema = new mongoose.Schema({
name: String
});
4. Model the schema
var Kitten = mongoose.model('Kitten', kittySchema);
5. Create a document
var silence = new Kitten({ name: 'Tom' });
console.log(silence.name); // Prints 'Tom' to console
// NOTE: methods must be added to the schema before compiling it with mongoose.model()
kittySchema.methods.speak = function () {
var greeting = this.name
? "Meow name is " + this.name
: "I don't have a name";
console.log(greeting);
}
var Kitten = mongoose.model('Kitten', kittySchema);
Functions added to the methods property of a schema get compiled into the Model prototype and exposed on each document instance:
var fluffy = new Kitten({ name: 'fluffy' });
fluffy.speak(); // "Meow name is fluffy"
I have a simple step function launching a Lambda, and I am looking for a way to pass parameters (event / context) to each of several subsequent tasks. My step function looks like this:
{
"Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
"StartAt": "HelloWorld",
"States": {
"HelloWorld": {
"Type": "Task",
"Parameters": {
"TableName": "table_example"
},
"Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync",
"End": true
}
}
}
In the Lambda, written in Python, I am using a simple handler:
def lambda_handler(event, context):
#...
The event and context look like this (checking the logs):
START RequestId: f58140b8-9f04-47d7-9285-510b0357b4c2 Version: $LATEST
I cannot find a way to pass parameters to this Lambda and to use them in the script. Essentially, what I am trying to do is run the same Lambda while passing a few different values as parameters.
Could anyone please point me in the right direction?
Based on what you said, "looking for a way to pass parameters (event / context) to each of several subsequent tasks", I assume that you want to pass non-static values to Lambdas.
There are two ways to pass arguments through a state machine: via InputPath and via Parameters. For the differences, please look here.
If you do not have any static values that you want to pass to the Lambda, I would do the following: pass all parameters to the step function in JSON format.
Input JSON for state machine
{
  "foo": 123,
  "bar": ["a", "b", "c"],
  "car": {
    "cdr": true
  },
  "TableName": "table_example"
}
In the step function you pass the entire JSON explicitly to each Lambda using "InputPath": "$", except for the first step, where it is passed implicitly. For more about the $ path syntax, please look here. You also need to take care of the task result, with one of multiple approaches using ResultPath. For most cases, the safest solution is to keep the task result in a special variable: "ResultPath": "$.taskresult".
{
"Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
"StartAt": "HelloWorld",
"States": {
"HelloWorld": {
"Type": "Task",
"Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync",
"Next": "HelloWorld2"
},
"HelloWorld2": {
"Type": "Task",
"InputPath": "$",
"ResultPath": "$.taskresult"
"Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync_2",
"End": true
}
}
}
In the Lambda this becomes the event variable, which can be accessed as a Python dictionary:
def lambda_handler(event, context):
    table_example = event["TableName"]
    a = event["bar"][0]
    cdr_value = event["car"]["cdr"]
    # "taskresult" does not exist as an event key in the Lambda
    # triggered by the first state; in all subsequent states it
    # holds the task result of the last executed state.
    taskresult = event["taskresult"]
With this approach you can use multiple step functions and different Lambdas and still keep both clean and small, by moving all the logic into the Lambdas.
It is also easier to debug, because the event variable has the same shape in all Lambdas, so with a simple print(event) you can see all the parameters needed for the entire state machine and what possibly went wrong.
I came across this too: apparently, when Resource is set to a Lambda ARN (for example "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync"), you can't use Parameters to specify the input; instead, the state of the step function is passed (possibly the input to your state machine, if there's no step before it).
To pass the function input via Parameters, you can specify Resource as "arn:aws:states:::lambda:invoke" and then provide your FunctionName in the Parameters section:
{
"StartAt": "HelloWorld",
"States": {
"HelloWorld": {
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"Parameters": {
"FunctionName": "YOUR_FUNCTION_NAME",
"Payload": {
"SOMEPAYLOAD": "YOUR PAYLOAD"
}
},
"End": true
}
}
}
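With this invocation style, the Lambda receives the Payload object as its event; a minimal sketch of the corresponding handler (the payload key is the hypothetical one from above):

def lambda_handler(event, context):
    # "event" is the Payload object defined in the state's Parameters.
    some_payload = event["SOMEPAYLOAD"]
    return {"received": some_payload}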
You can find the documentation for invoking Lambda functions in here: https://docs.aws.amazon.com/step-functions/latest/dg/connect-lambda.html
You can also potentially use InputPath, or pull in elements from your step function's state: https://docs.aws.amazon.com/step-functions/latest/dg/input-output-inputpath-params.html
For some reason, specifying the Lambda function ARN directly in Resource doesn't work.
The following workaround is purely ASL: you just create a Pass step before, holding the parameters, and its output is used as the input of the next step (your HelloWorld step with the Lambda call):
{
"Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
"StartAt": "HelloParams",
"States": {
"HelloParams": {
"Type": "Pass",
"Parameters": {
"TableName": "table_example"
},
"Next": "HelloWorld"
},
"HelloWorld": {
"Type": "Task",
"Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync",
"End": true
}
}
}
There is a workaround in another response that tells you to add a previous step using a Lambda function, but that is not needed for simple cases. Context values can also be mapped, for example the current timestamp (note the ".$" suffix on the key, which tells ASL to resolve the value as a path):
"HelloParams": {
"Type": "Pass",
"Parameters": {
"TableName": "table_example",
"Now": "$$.State.EnteredTime"
},
"Next": "HelloWorld"
},
Also, InputPath and ResultPath can be used to prevent overwriting the values from previous steps. For example:
{
"Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
"StartAt": "HelloParams",
"States": {
"HelloParams": {
"Type": "Pass",
"Parameters": {
"TableName": "table_example"
},
"ResultPath": "$.hello_prms",
"Next": "HelloWorld"
},
"HelloWorld": {
"Type": "Task",
"Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync",
"InputPath": "$.hello_prms",
"ResultPath": "$.hello_result",
"End": true
}
}
}
That would save the parameters in hello_prms (so that you can reuse them in other steps) and save the result of the execution in hello_result, without carrying over values from previous steps (in case you add them).
Like Milan mentioned in his comment, you can pass on data to a Lambda function from a Step Function State.
In the Lambda function you'll need to read the event contents.
def lambda_handler(event, context):
    # Read the values passed in from the Step Function state.
    TableName = event['TableName']