Scene:
I am using Azure DevOps pipelines as a security separator, so that my front end is not directly accessing my AKS.
(Above is a business requirement I am not able to avoid or change in any way)
What I got so far:
I am able to put together an HTTP POST body with the information that I will be getting from my front end, and I am able to parse it out as JSON inside the Azure DevOps pipeline (using Python).
Issue:
I must be able to iterate through each of the objects in my JSON and execute actions as indicated.
JSON example:
{
  "actions": [
    {
      "action": "action(0)",
      "config": {
        "actionType": "Start",
        "stage": "test",
        "region": "North",
        "version": "v756",
        "customer": "Hans"
      }
    },
    {
      "action": "action(1)",
      "config": {
        "actionType": "Stop",
        "stage": "test",
        "region": "East",
        "version": "v752",
        "customer": "Christian"
      }
    },
    {
      "action": "action(2)",
      "config": {
        "actionType": "Delete",
        "stage": "prod",
        "region": "South",
        "version": "v759",
        "customer": "Anderson"
      }
    }
  ]
}
** Edited (malformed JSON example)
TypeScript that generates my testing data
const value = {
actionType: "Create",
stage: "test",
region: "North",
version: "v753",
customer: "Hans"
}
interface Action {
readonly action: string;
readonly config: typeof value;
}
const actions: Array<Action> = [];
for (let i = 0; i < 3; i++) {
  actions.push({
    action: `action(${i})`,
    config: value
  });
}
const result = JSON.stringify({ actions });
const body = {
templateParameters: {
actions: {
value: result
}
}
}
** Edited: Added the TypeScript
Current pipeline
name: Test-Deploy-$(Date:yyyyMMdd)-$(Date:hh-mm)$(Rev:.r)

pool:
  vmImage: ubuntu-latest

parameters:
- name: actions
  type: object
  default: []

stages:
- stage: test_stage
  displayName: Test stage
  jobs:
  - job: test
    displayName: Test the values
    steps:
    - ${{ each action in parameters.actions }}:
      - task: PowerShell@2
        displayName: Print out the "Action" variable
        inputs:
          targetType: 'inline'
          script: '"${{ action }}"'
** Edited: Added the pipeline as it stands
My current thinking:
I would like to be able to iterate through the actions in a "for-each" fashion. Like in this pseudo pipeline script below:
- ${{ each action in $(actions) }}:
But I am not able to come up with exactly how that would be done in Azure DevOps Pipelines, so I am hoping that someone here can figure it out with me :)
${{ each }} is a template expression. Template expressions are intended for use with parameters rather than variables, because they are evaluated at compile time (when $(variablename) can't have a value yet).
Now, I've not actually tried this myself, but the Runs API has a JSON request body element called templateParameters, which takes an object. What you could try is something like this:
Add a parameter to your pipeline, like:
parameters:
- name: actions
type: object
default: []
When submitting the Runs API call to run your pipeline, include something like:
{
  "previewRun": true,
  "templateParameters": {
    "actions": {
      ... your JSON actions content as generated in your question ...
    }
  },
  ... other parameters to your JSON run request ...
}
In your pipeline, reference
- ${{ each action in parameters.actions }}:
The previewRun parameter will cause Azure Pipelines to not run the pipeline, but to return the compiled template for debugging and validation purposes. Remove it when you're ready to execute it.
Also, you'll likely need to do some experimenting with templateParameters to get something the pipeline accepts, such as declaring it as an array: "templateParameters": [ { "actions": [ your actions ] } ] (as I said, I haven't actually done this, but the documentation suggests this might be a good path to explore).
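Putting the pieces together, here is a minimal Python sketch of building that request body (the Runs API URL, api-version, and authentication are omitted; the action values mirror the question's test data, and passing the actions as an object rather than a pre-serialized JSON string is an assumption on my part):

```python
import json

# Build the actions payload, mirroring the TypeScript generator above.
actions = [
    {
        "action": f"action({i})",
        "config": {
            "actionType": "Start",
            "stage": "test",
            "region": "North",
            "version": "v756",
            "customer": "Hans",
        },
    }
    for i in range(3)
]

# Request body for the Runs API call (POST .../_apis/pipelines/{pipelineId}/runs).
# previewRun makes Azure Pipelines return the compiled template instead of
# running the pipeline. The actions are passed as an object, not as a JSON
# string, so that ${{ each }} has something iterable at compile time.
body = {
    "previewRun": True,
    "templateParameters": {
        "actions": actions,
    },
}
payload = json.dumps(body)
```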
I am using MongoDB with PyMongo and I would like to separate the schema definition from the rest of the application. I have a file user_schema.json with the schema for the user collection:
{
"collMod": "user",
"validator": {
"$jsonSchema": {
"bsonType": "object",
"required": ["name"],
"properties": {
"name": {
"bsonType": "string"
}
}
}
}
}
Then in db.py:
import json
from collections import OrderedDict

with open("user_schema.json", "r") as coll:
    data = OrderedDict(json.loads(coll.read()))  # Read the JSON schema.

name = data["collMod"]      # Get the name of the collection.
db.create_collection(name)  # Create the collection.
db.command(data)            # Add validation to the collection.
Is there any way to add a unique index on the name field in the user collection without changing db.py (only by changing user_schema.json)? I know I can use this:
db.user.create_index("name", unique=True)
however, then I would have information about the collection in two places. I would like to keep all configuration of the collection in the user_schema.json file. I need something like this:
{
"collMod": "user",
"validator": {
"$jsonSchema": {
...
}
},
"index": {
"name": {
"unique": true
}
}
}
No, you won't be able to do that without changing db.py.
The content of user_schema.json is an object that can be passed to db.command to run the collMod command.
In order to create an index, you need to run the createIndexes command, or one of the helpers that calls it.
It is not possible to complete both of these actions with a single command.
A simple modification would be to store an array of commands to run in user_schema.json, and have db.py iterate the array and run each command.
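For instance (a sketch of that suggestion; the helper name and file layout are mine, not an established API), user_schema.json could hold a JSON array of command documents, and db.py could loop over them:

```python
import json

# user_schema.json would now contain an array of commands, e.g.:
# [
#   {"create": "user"},
#   {"collMod": "user", "validator": {"$jsonSchema": {...}}},
#   {"createIndexes": "user",
#    "indexes": [{"key": {"name": 1}, "name": "name_unique", "unique": true}]}
# ]

def apply_schema(db, path):
    """Run every command listed in the schema file, in order."""
    with open(path) as f:
        commands = json.load(f)
    for command in commands:
        db.command(command)
```

Note that collMod still requires the collection to exist, so you would either keep the create_collection call in db.py or put a create command first in the array, as sketched above.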
I want to add an account which has some information readable by all users. According to the documentation, the user needs to have the can_get_all_acc_detail permission. So I'm trying to add it by creating a new role:
tx = self.iroha.transaction([
self.iroha.command('CreateRole', role_name='info', permissions=[primitive_pb2.can_get_all_acc_detail])
])
tx = IrohaCrypto.sign_transaction(tx, account_private_key)
net.send_tx(tx)
Unfortunately, after sending the transaction I see the status:
status_name:ENOUGH_SIGNATURES_COLLECTED, status_code:9, error_code:0(OK)
But then it takes 5 minutes until the timeout.
I've noticed that the transaction JSON embeds permissions differently than the genesis block does:
payload {
reduced_payload {
commands {
create_role {
role_name: "info_account"
permissions: can_get_all_acc_detail
}
}
creator_account_id: "admin#example"
created_time: 1589408498074
quorum: 1
}
}
signatures {
public_key: "92f9f9e10ce34905636faff41404913802dfce9cd8c00e7879e8a72085309f4f"
signature: "568b69348aa0e9360ea1293efd895233cb5a211409067776a36e6647b973280d2d0d97a9146144b9894faeca572d240988976f0ed224c858664e76416a138901"
}
By comparison, in genesis.block it is:
{
"createRole": {
"roleName": "money_creator",
"permissions": [
"can_add_asset_qty",
"can_create_asset",
"can_receive",
"can_transfer"
]
}
},
I'm using Iroha version 1.1.3 (but I also tested on 1.1.1); the Python Iroha SDK version is 0.0.5.5.
Does the account you used to execute the CreateRole command have the can_create_role permission?
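If it doesn't, that would explain the transaction stalling after ENOUGH_SIGNATURES_COLLECTED. A sketch of what granting it in genesis.block could look like, following the createRole format quoted above (the role name here is illustrative):

```
{
  "createRole": {
    "roleName": "role_creator",
    "permissions": [
      "can_create_role"
    ]
  }
},
```

The role would then still need to be attached to the creator account (for example via an appendRole entry).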
I tried creating a Data Transfer Service using bigquery_datatransfer. I installed the following Python library:
pip install --upgrade google-cloud-bigquery-datatransfer
Used the method
create_transfer_config(parent, transfer_config)
I have defined the transfer_config values for the data_source_id amazon_s3:
transfer_config = {
"destination_dataset_id": "My Dataset",
"display_name": "test_bqdts",
"data_source_id": "amazon_s3",
"params": {
"destination_table_name_template":"destination_table_name",
"data_path": <data_path>,
"access_key_id": args.access_key_id,
"secret_access_key": args.secret_access_key,
"file_format": <>
},
"schedule": "every 10 minutes"
}
But while running the script, I'm getting the following error:
ValueError: Protocol message Struct has no "destination_table_name_template" field.
The fields given inside params are not recognized, and I couldn't find which fields should be defined inside the "params" struct.
What fields need to be defined inside the "params" of transfer_config to create the Data Transfer job successfully?
As you can see in the documentation, you should try passing your config dict through the google.protobuf.json_format.ParseDict() function.
transfer_config = google.protobuf.json_format.ParseDict(
{
"destination_dataset_id": dataset_id,
"display_name": "Your Scheduled Query Name",
"data_source_id": "scheduled_query",
"params": {
"query": query_string,
"destination_table_name_template": "your_table_{run_date}",
"write_disposition": "WRITE_TRUNCATE",
"partitioning_field": "",
},
"schedule": "every 24 hours",
},
bigquery_datatransfer_v1.types.TransferConfig(),
)
Please let me know if it helps you
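For the amazon_s3 source from the question, the same ParseDict wrapping should apply. Here is a hedged sketch of the dict shape, using only field names that already appear in the question (all values are placeholders you must replace):

```python
# Placeholder values throughout; wrap with
#   google.protobuf.json_format.ParseDict(
#       s3_config, bigquery_datatransfer_v1.types.TransferConfig())
# before passing the result to create_transfer_config.
s3_config = {
    "destination_dataset_id": "my_dataset",          # placeholder
    "display_name": "test_bqdts",
    "data_source_id": "amazon_s3",
    "params": {
        "destination_table_name_template": "destination_table_name",
        "data_path": "s3://example-bucket/path/*",   # placeholder
        "access_key_id": "YOUR_ACCESS_KEY_ID",       # placeholder
        "secret_access_key": "YOUR_SECRET_KEY",      # placeholder
        "file_format": "CSV",
    },
    "schedule": "every 10 minutes",
}
```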
So I have an application that uses MongoDB as a database. The application makes use of a few collections.
When and how should I go about defining the "schema" of the database which includes setting up all the collections as well as indexes needed?
AFAIK, you are unable to define empty collections in MongoDB (correct me if I am wrong; if I can do this, it basically answers this question). Should I insert a dummy value for each collection and use that to set up all my indexes?
What is the best practice for this?
You don't create collections in MongoDB.
You just start using them immediately whether they “exist” or not.
Now to defining the “schema”. As I said, you just start using a collection, so, if you need to ensure an index, just go ahead and do this. No collection creation. Any collection will effectively be created when you first modify it (creating an index counts).
> db.no_such_collection.getIndices()
[ ]
> db.no_such_collection.createIndex({whatever: 1})
> db.no_such_collection.getIndices()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test.no_such_collection",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"whatever" : 1
},
"ns" : "test.no_such_collection",
"name" : "whatever_1"
}
]
Create an empty collection
This is how you can create an empty collection in MongoDB using the built-in interactive terminal:
db.createCollection('someName'); // create empty collection
You don't really have to, though, because as pointed out above, collections are created on the fly once you start interacting with the database.
MongoDB is schema-less, end of story, but ...
You could create your own class that interacts with the Mongo database. In that class you could define rules that have to be fulfilled before data may be inserted into a collection, and otherwise throw a custom exception.
Or, if you are using Node.js server-side, you could install the mongoose package, which allows you to interact with the database in an OOP style (why bother reinventing the wheel, right?).
Mongoose provides a straight-forward, schema-based solution to model
your application data. It includes built-in type casting, validation,
query building, business logic hooks and more, out of the box.
docs: mongoose NPM installation and basic usage: https://www.npmjs.com/package/mongoose
full mongoose documentation: http://mongoosejs.com
Mongoose use example (defining schema and inserting data)
var personSchema = new Schema({
name: { type: String, default: 'anonymous' },
age: { type: Number, min: 18, index: true },
bio: { type: String, match: /[a-zA-Z ]/ },
date: { type: Date, default: Date.now },
});
var personModel = mongoose.model('Person', personSchema);
var comment1 = new personModel({
name: 'Witkor',
age: '29',
bio: 'Description',
});
comment1.save(function (err, comment) {
if (err) console.log(err);
else console.log('the following comment was saved:', comment);
});
Wrapping up ...
Being able to define a schema with restrictions in our code doesn't change the fact that MongoDB itself is schema-less, which in some scenarios is actually an advantage. This way, if you ever decide to change the schema and don't care about backward compatibility, you just edit the schema in your script and you are done. This is the basic idea behind MongoDB: being able to store different sets of data in each document within the same collection. However, some restrictions in the code-base logic are usually desirable.
As of version 3.2, MongoDB provides schema validation at the collection level:
https://docs.mongodb.com/manual/core/schema-validation/
Example of creating a collection with a validation schema:
db.createCollection("students", {
validator: {
$jsonSchema: {
bsonType: "object",
required: [ "name", "year", "major", "address" ],
properties: {
name: {
bsonType: "string",
description: "must be a string and is required"
},
year: {
bsonType: "int",
minimum: 2017,
maximum: 3017,
description: "must be an integer in [ 2017, 3017 ] and is required"
},
major: {
enum: [ "Math", "English", "Computer Science", "History", null ],
description: "can only be one of the enum values and is required"
},
gpa: {
bsonType: [ "double" ],
description: "must be a double if the field exists"
},
address: {
bsonType: "object",
required: [ "city" ],
properties: {
street: {
bsonType: "string",
description: "must be a string if the field exists"
},
city: {
bsonType: "string",
description: "must be a string and is required"
}
}
}
}
}
}
})
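The same $jsonSchema validator can also be attached from Python with PyMongo's create_collection. A minimal sketch, assuming a pymongo Database handle named db (the helper function and the trimmed schema are mine):

```python
def create_validated_collection(db, name, json_schema):
    # Attach a $jsonSchema validator to the collection at creation time.
    return db.create_collection(name, validator={"$jsonSchema": json_schema})

# Trimmed version of the students schema above.
students_schema = {
    "bsonType": "object",
    "required": ["name", "year"],
    "properties": {
        "name": {"bsonType": "string"},
        "year": {"bsonType": "int", "minimum": 2017, "maximum": 3017},
    },
}

# Usage against a real database:
# create_validated_collection(db, "students", students_schema)
```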
const mongoose = require("mongoose");

const RegisterSchema = mongoose.Schema({
  username: {
    type: String,
    unique: true,
    required: true,
  },
  email: {
    type: String,
    unique: true,
    required: true,
  },
  password: {
    type: String,
    required: true,
  },
  date: {
    type: Date,
    default: Date.now,
  },
});

module.exports = Register = mongoose.model("Register", RegisterSchema);
I watched this tutorial.
You have already been taught that MongoDB is schema-less. However, in practice we do have a kind of "schema": the object space of the objects whose relations a MongoDB database represents. With the caveat that Ruby is my go-to language, and that I make no claims about the exhaustiveness of this answer, I recommend trying two pieces of software:
1. ActiveRecord (part of Rails)
2. Mongoid (standalone MongoDB "schema", or rather, object persistence system in Ruby)
Expect a learning curve, though. I hope that others will point you to solutions in other great languages outside my expertise, such as Python.
1. Install mongoose:
npm install mongoose
2. Set up the connection string and callbacks
// getting-started.js
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/test');
//call-backs
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function() {
// we're connected!
});
3. Write your schema
var kittySchema = new mongoose.Schema({
name: String
});
4. Model the schema
var Kitten = mongoose.model('Kitten', kittySchema);
5. Create a document
var silence = new Kitten({ name: 'Tom' });
console.log(silence.name); // Prints 'Tom' to console
// NOTE: methods must be added to the schema before compiling it with mongoose.model()
kittySchema.methods.speak = function () {
var greeting = this.name
? "Meow name is " + this.name
: "I don't have a name";
console.log(greeting);
}
var Kitten = mongoose.model('Kitten', kittySchema);
Functions added to the methods property of a schema get compiled into the Model prototype and exposed on each document instance:
var fluffy = new Kitten({ name: 'fluffy' });
fluffy.speak(); // "Meow name is fluffy"