I'm trying to deploy a Custom Instance Template using gcloud deployment-manager, but I keep getting this error:
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1507833758152-55b5de788f540-e3be8bf6-a792d98e]: errors:
- code: RESOURCE_ERROR
  location: /deployments/my-project/resources/worker-template
  message: '{"ResourceType":"compute.v1.instanceTemplate","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"errors":[{"domain":"global","message":"Invalid value for field ''resource.properties'': ''''. Instance Templates must provide instance properties.","reason":"invalid"}],"message":"Invalid value for field ''resource.properties'': ''''. Instance Templates must provide instance properties.","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/compute/v1/projects/my-project/global/instanceTemplates","httpMethod":"POST"}}'
My Python generate_config function is this:
def generate_config(context):
    resources = [{
        'type': 'compute.v1.instanceTemplate',
        'name': 'worker-template',
        'properties': {
            'zone': context.properties['zone'],
            'description': 'Worker Template',
            'machineType': context.properties['machineType'],
            'disks': [{
                'deviceName': 'boot',
                'type': 'PERSISTENT',
                'boot': True,
                'autoDelete': True,
                'initializeParams': {
                    'sourceImage': '/'.join([
                        context.properties['compute_base_url'],
                        'projects', context.properties['os_project'],
                        'global/images/family', context.properties['os_project_family']
                    ])
                }
            }],
            'networkInterfaces': [{
                'network': '$(ref.' + context.properties['network'] + '.selfLink)',
                'accessConfigs': [{
                    'name': 'External NAT',
                    'type': 'ONE_TO_ONE_NAT'
                }]
            }]
        }
    }]
    return {'resources': resources}
The properties field is not empty, so the error message doesn't make much sense. Any ideas?
Thanks!
After reading this example, I just found that the correct structure for compute.v1.instanceTemplate is:
...
'type': 'compute.v1.instanceTemplate',
'name': 'worker-template',
'properties': {
    'project': 'my-project',
    'properties': {
        'zone': context.properties['zone'],
        ...
    }
}
...
The structure follows this doc.
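For completeness, here is a sketch of the full corrected template. The values mirror the original function above; context.env['project'] stands in for the hard-coded project id the answer uses.

def generate_config(context):
    resources = [{
        'type': 'compute.v1.instanceTemplate',
        'name': 'worker-template',
        'properties': {
            'project': context.env['project'],  # or a hard-coded id like 'my-project'
            # The instance settings the API was complaining about live one
            # level down, in this nested 'properties' dict.
            'properties': {
                'zone': context.properties['zone'],
                'description': 'Worker Template',
                'machineType': context.properties['machineType'],
                'disks': [{
                    'deviceName': 'boot',
                    'type': 'PERSISTENT',
                    'boot': True,
                    'autoDelete': True,
                    'initializeParams': {
                        'sourceImage': '/'.join([
                            context.properties['compute_base_url'],
                            'projects', context.properties['os_project'],
                            'global/images/family',
                            context.properties['os_project_family']
                        ])
                    }
                }],
                'networkInterfaces': [{
                    'network': '$(ref.' + context.properties['network'] + '.selfLink)',
                    'accessConfigs': [{
                        'name': 'External NAT',
                        'type': 'ONE_TO_ONE_NAT'
                    }]
                }]
            }
        }
    }]
    return {'resources': resources}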
OK, I'm sure there's something wrong with this jsonschema, but I just can't seem to wrap my head around the problem.
I'm not going to post the actual code, but a minimal example that reproduces the issue.
Here's what I had before, which worked fine:
person = {
    'type': 'object',
    'properties': {
        'name': {
            'type': 'string'
        },
        'auth_token': {
            'type': 'string'
        },
        'username': {
            'type': 'string'
        },
        'password': {
            'type': 'string'
        }
    },
    'oneOf': [
        {
            'required': ['auth_token']
        },
        {
            'required': ['username', 'password']
        }
    ],
    'required': ['name']
}
The idea here was that you always need to provide the name of the person, and then either an auth token or a username and password pair. As I said above, this validation worked fine: we have parametrized tests that send all possible combinations of invalid JSON and evaluate the resulting error message, and those tests pass.
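For illustration, such a parametrized test might look roughly like this (a sketch; pytest, the jsonschema package, and the test cases here are assumptions, not the actual suite):

import pytest
from jsonschema import Draft7Validator

# The schema from above, restated compactly.
person = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'auth_token': {'type': 'string'},
        'username': {'type': 'string'},
        'password': {'type': 'string'},
    },
    'oneOf': [
        {'required': ['auth_token']},
        {'required': ['username', 'password']},
    ],
    'required': ['name'],
}

@pytest.mark.parametrize('doc,valid', [
    ({'name': 'Jo', 'auth_token': 't'}, True),
    ({'name': 'Jo', 'username': 'u', 'password': 'p'}, True),
    ({'name': 'Jo'}, False),                         # no credentials at all
    ({'name': 'Jo', 'auth_token': 't',
      'username': 'u', 'password': 'p'}, False),     # both pairs: oneOf fails
])
def test_person_schema(doc, valid):
    errors = list(Draft7Validator(person).iter_errors(doc))
    assert (not errors) == valid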
But then a new requirement came in and I needed to add a second mutually exclusive required pair of fields, which I did in this way:
person = {
    'type': 'object',
    'properties': {
        'name': {
            'type': 'string'
        },
        'auth_token': {
            'type': 'string'
        },
        'username': {
            'type': 'string'
        },
        'password': {
            'type': 'string'
        },
        'project_id': {
            'type': 'number'
        },
        'contract_date_from': {
            'type': 'string'
        },
        'contract_date_to': {
            'type': 'string'
        }
    },
    'allOf': [
        {
            'oneOf': [
                {
                    'required': ['auth_token']
                },
                {
                    'required': ['username', 'password']
                }
            ]
        },
        {
            'oneOf': [
                {
                    'required': ['project_id']
                },
                {
                    'required': ['contract_date_from', 'contract_date_to']
                }
            ]
        }
    ],
    'required': ['name']
}
But now the second validation always fails, whether the JSON provided is valid or invalid. The error message I get is:
{'name': 'John Doe', 'auth_token': '9d9a324b-26de-4ac3-85eb-05566e4a7204', 'username': None, 'password': None, 'project_id': 2785, 'contract_date_from': None, 'contract_date_to': None} is valid under each of {'required': ['contract_date_from', 'contract_date_to']}, {'required': ['project_id']}
No matter what values I send in those three fields (i.e. project_id, contract_date_from and contract_date_to), it fails with the same error. I've tried leaving all three empty, completing all three, and every permutation in between, but the error stays the same.
I've been reading the JSON Schema documentation, but I can't seem to grasp what's going on in this example. I'm considering trying different approaches, but I'd really like to understand why this is not working. Any help is appreciated!
{'name': 'John Doe', 'auth_token': '9d9a324b-26de-4ac3-85eb-05566e4a7204', 'username': None, 'password': None, 'project_id': 2785, 'contract_date_from': None, 'contract_date_to': None} is valid under each of {'required': ['contract_date_from', 'contract_date_to']}, {'required': ['project_id']}
Read the error message more carefully: you asked for either project_id to be provided, or for contract_date_from and contract_date_to to be provided, but you are providing all three. Providing a null value for a property is still providing that property. The error message is confusing, but you would be failing validation anyway, because null is not a string. Your evaluator simply runs the allOf -> oneOf checks first, so that's the error that comes back first. You should still get the type-violation errors as well, though (if you don't, that's a bug: evaluators are required to report all errors, not just the first).
You can make the errors better, at the expense of brevity, by adding the "type" checks next to the "required" keywords. That ensures the oneOf branches produce failures rather than successes, and may make the error messages more obvious.
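For instance, a sketch of that suggestion using the jsonschema package, trimmed to the second oneOf pair and the field names from the question (Draft7Validator and the sample document are assumptions for illustration): each branch now also type-checks its fields, so a null value fails the branch instead of satisfying its 'required' clause.

from jsonschema import Draft7Validator

person = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'project_id': {'type': 'number'},
        'contract_date_from': {'type': 'string'},
        'contract_date_to': {'type': 'string'},
    },
    'required': ['name'],
    'oneOf': [
        {
            'required': ['project_id'],
            'properties': {'project_id': {'type': 'number'}},
        },
        {
            'required': ['contract_date_from', 'contract_date_to'],
            'properties': {
                'contract_date_from': {'type': 'string'},
                'contract_date_to': {'type': 'string'},
            },
        },
    ],
}

doc = {'name': 'John Doe', 'project_id': 2785,
       'contract_date_from': None, 'contract_date_to': None}

# Only the project_id branch matches now, so the oneOf is satisfied and
# the remaining errors point at the real problem: null is not a string.
for error in Draft7Validator(person).iter_errors(doc):
    print(error.message)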
I am setting up an entire GCP architecture with Deployment Manager, using the Python template structure.
I have tried to execute the script below:
{
    'name': 'dataproccluster',
    'type': 'dataproc.py',
    'subnetwork': 'default',
    'properties': {
        'zone': ZONE_NORTH,
        'region': REGION_NORTH,
        'serviceAccountEmail': 'X@appspot.gserviceaccount.com',
        'softwareConfig': {
            'imageVersion': '1.4-debian9',
            'properties': {
                'dataproc:dataproc.conscrypt.provider.enable': 'False'
            }
        },
        'master': {
            'numInstances': 1,
            'machineType': 'n1-standard-1',
            'diskSizeGb': 50,
            'diskType': 'pd-standard',
            'numLocalSsds': 0
        },
        'worker': {
            'numInstances': 2,
            'machineType': 'n1-standard-1',
            'diskType': 'pd-standard',
            'diskSizeGb': 50,
            'numLocalSsds': 0
        },
        'initializationActions': [{
            'executableFile': 'gs://dataproc-initialization-actions/python/pip-install.sh'
        }],
        'metadata': {
            'PIP_PACKAGES': 'requests_toolbelt==0.9.1 google-auth==1.6.31'
        },
        'labels': {
            'environment': 'dev',
            'data_type': 'X'
        }
    }
}
Which results in the following error:
Initialization action failed. Failed action 'gs://dataproc-initialization-actions/python/pip-install.sh'
I would like to work out whether this is an error on my side or some sort of API problem. I found Google issue-tracker tickets on this topic covering CLI deployment, but they were marked as solved; I found nothing on the Deployment Manager side.
If it is an error on my side, what am I doing wrong?
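For reference, here is roughly how a resource block like this sits inside a Deployment Manager Python template (a sketch: generate_config is the entry point DM calls, ZONE_NORTH / REGION_NORTH are placeholder constants, and the cluster properties are abbreviated):

ZONE_NORTH = 'europe-north1-a'      # placeholder values
REGION_NORTH = 'europe-north1'

def generate_config(context):
    # Deployment Manager calls this and expects {'resources': [...]}.
    resources = [{
        'name': 'dataproccluster',
        'type': 'dataproc.py',      # the imported sub-template
        'properties': {
            'zone': ZONE_NORTH,
            'region': REGION_NORTH,
            # ... remaining cluster properties as in the block above ...
        },
    }]
    return {'resources': resources}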
AVOID EVAL
My question has been answered, and at first I did use eval, but after some reading on what eval does and can do, I stopped using it and switched to an alternative found here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval#Do_not_ever_use_eval!
In my application I'm building the whole chart options object in the backend and returning it as a JSON response:
from django.http import JsonResponse

def get_chart_data(request):
    chart = {
        'title': {
            'text': ''
        },
        'xAxis': {
            'categories': [],
            'title': {
                'text': ''
            },
            'type': 'category',
            'crosshair': True
        },
        'yAxis': [{
            'allowDecimals': False,
            'min': 0,
            'title': {
                'text': ''
            }
        }, {
            'allowDecimals': False,
            'min': 0,
            'title': {
                'text': ''
            },
            'opposite': True
        }],
        'series': [{
            'type': 'column',
            'yAxis': 1,
            'name': '',
            'data': []
        }, {
            'type': 'line',
            'name': '',
            'data': []
        }, {
            'type': 'line',
            'name': '',
            'data': []
        }]
    }
    return JsonResponse(chart)
Then I fetch it with Ajax and use the response as the chart options:
Highcharts.chart('dashboard1', data);
I'm OK with this so far, but I've run into problems when I want to use Highcharts functions as part of the options, for example setting the text color with Highcharts.getOptions().colors[0]:
'title': {
    'text': 'Rainfall',
    'style': {
        'color': Highcharts.getOptions().colors[0]
    }
},
If I don't put quotes around this when building the options in views.py, it is treated as Python code and raises an error; if I do add quotes, it arrives in JavaScript as a string, which doesn't work.
Is this possible? Or should I just build the options in JavaScript and fetch only the data part from the backend rather than the whole thing?
You could return the JS code from Django as a string and then run eval() on it, but executing code like that opens up the possibility of an XSS attack, especially if the information is user-submitted.
Otherwise, your best bet is to do the styling on the JS end where possible, and only manipulate the incoming data.
document.querySelector('a').addEventListener('click', function (e) {
    e.preventDefault();
    var complexJson = {"parent": {"child": "alert('Here is a nested alert!')"}};
    var alertString = "alert('Here is a simple alert!')";
    eval(complexJson["parent"]["child"]);
    eval(alertString);
});

<a href="#">Click me!</a>
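If you go the data-only route, the Django view can shrink to something like this sketch (the field names and sample values here are made up), leaving every Highcharts.* call to a static config on the JS side:

from django.http import JsonResponse

def get_chart_data(request):
    # Return only the data; the client-side chart config owns the styling
    # and can freely call Highcharts.getOptions().colors[0] there.
    return JsonResponse({
        'categories': ['Jan', 'Feb', 'Mar'],   # hypothetical sample data
        'rainfall': [49.9, 71.5, 106.4],
    })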
I'm trying to create a Premium SSD disk to attach to a VM in Azure, but can't seem to figure out how to specify that correctly: I keep ending up with a Standard HDD.
azure_client.compute_client.disks.create_or_update("my_resource_group", 'deleteme-' + str(disk_num), {
    "location": "westus",
    "disk_size_gb": 256,
    'creation_data': {
        'create_option': 'empty',
        'sku': {
            'name': 'Premium_LRS'  # <=== What I want
        }
    },
    'tags': {
        "fake": "tags"
    }
}).result().as_dict()
{
    'id': '/subscriptions/5efe2633-26ac-4638-9f1f-6e24e494d9b4/resourceGroups/my_resource_group/providers/Microsoft.Compute/disks/deleteme-26',
    'provisioning_state': 'Succeeded',
    'name': 'deleteme-26',
    'type': 'Microsoft.Compute/disks',
    'time_created': '2019-02-05T00:37:41.907815Z',
    'tags': {
        'fake': 'tags'
    },
    'creation_data': {
        'create_option': 'Empty'
    },
    'sku': {
        'tier': 'Standard',
        'name': 'Standard_LRS'  # <== What I actually get
    },
    'location': 'westus',
    'disk_size_gb': 256
}
I'm open to connecting the disk directly to the host at creation, but can't figure out the API for tagging the disk that way.
I've also tried specifying 'tier': 'Premium' in the sku description, but no change. Here's the documentation I've found:
A bit embarrassing, but maybe this will save someone else in the future... I had put the SKU in the wrong sub-dictionary. Azure doesn't yell at you if you put random stuff it doesn't understand in the creation_data section.
azure_client.compute_client.disks.create_or_update("my_resource_group", 'deleteme-' + str(disk_num), {
    "location": "westus",
    "disk_size_gb": 256,
    'creation_data': {
        'create_option': 'empty'
    },
    'sku': {
        'name': 'Premium_LRS'  # <=== Moved out of the creation_data dict
    },
    'tags': {
        "fake": "tags"
    }
}).result().as_dict()
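As a sanity check, the returned Disk model can be inspected to confirm the SKU took effect (a small sketch reusing the call above; disk_num as before):

disk = azure_client.compute_client.disks.create_or_update(
    "my_resource_group", 'deleteme-' + str(disk_num), {
        "location": "westus",
        "disk_size_gb": 256,
        'creation_data': {'create_option': 'empty'},
        'sku': {'name': 'Premium_LRS'},
    }).result()

# sku.name on the returned model reflects what was actually created.
assert disk.sku.name == 'Premium_LRS'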
As far as I understand, Python Eve does not support double-level embedding. Can you confirm?
To better explain: given a document A referring to a document B, which in turn refers to a document C, it is not possible to have Eve serve A documents with C embedded, right?
I think this is not possible, since the docs also say the following:
We do not support multiple layers embeddings
This doesn't work; I have tried it. The settings.py file below shows the double embedding I attempted.
import os
from schemas.subjects import subject_schema
from schemas.units import units_schema
# We want to seamlessly run our API both locally and on Heroku. If running on
# Heroku, sensible DB connection settings are stored in environment variables.
# MONGO_URI = os.environ.get('MONGODB_URI', 'mongodb://user:user@localhost:27017/evedemo')
MONGO_HOST = 'localhost'
MONGO_PORT = 27017
MONGO_DBNAME = 'test'
# Enable reads (GET), inserts (POST) and DELETE for resources/collections
# (if you omit this line, the API will default to ['GET'] and provide
# read-only access to the endpoint).
RESOURCE_METHODS = ['GET', 'POST', 'DELETE']
# Enable reads (GET), edits (PATCH) and deletes of individual items
# (defaults to read-only item access).
ITEM_METHODS = ['GET', 'PATCH', 'DELETE']
# We enable standard client cache directives for all resources exposed by the
# API. We can always override these global settings later.
CACHE_CONTROL = 'max-age=20'
CACHE_EXPIRES = 20
# Our API will expose two resources (MongoDB collections): 'people' and
# 'works'. In order to allow for proper data validation, we define behaviour
# and structure.
people = {
    # 'title' tag used in item links.
    'item_title': 'person',

    # by default the standard item entry point is defined as
    # '/people/<ObjectId>/'. We leave it untouched, and we also enable an
    # additional read-only entry point. This way consumers can also perform
    # GET requests at '/people/<lastname>/'.
    'additional_lookup': {
        'url': 'regex("[\w]+")',
        'field': 'lastname'
    },

    # Schema definition, based on Cerberus grammar. Check the Cerberus project
    # (https://github.com/pyeve/cerberus) for details.
    'schema': {
        'id': {'type': 'integer', 'required': True},
        'firstname': {
            'type': 'string',
            'minlength': 1,
            'maxlength': 10,
        },
        'lastname': {
            'type': 'string',
            'minlength': 1,
            'maxlength': 15,
            'required': True,
            # talk about hard constraints! For the purpose of the demo
            # 'lastname' is an API entry-point, so we need it to be unique.
            'unique': True,
        },
        # 'role' is a list, and can only contain values from 'allowed'.
        'role': {
            'type': 'list',
            'allowed': ["author", "contributor", "copy"],
        },
        # An embedded 'strongly-typed' dictionary.
        'location': {
            'type': 'dict',
            'schema': {
                'address': {'type': 'string'},
                'city': {'type': 'string'}
            },
        },
        'born': {
            'type': 'datetime',
        },
    }
}

works = {
    # if 'item_title' is not provided Eve will just strip the final
    # 's' from resource name, and use it as the item_title.
    #'item_title': 'work',

    # We choose to override global cache-control directives for this resource.
    'cache_control': 'max-age=10,must-revalidate',
    'cache_expires': 10,

    'schema': {
        'title': {
            'type': 'string',
            'required': True,
        },
        'description': {
            'type': 'string',
        },
        'owner': {
            'type': 'objectid',
            'required': True,
            # referential integrity constraint: value must exist in the
            # 'people' collection. Since we aren't declaring a 'field' key,
            # it will default to `people._id` (or, more precisely, to whatever
            # ID_FIELD value is).
            'data_relation': {
                'resource': 'people',
                # make the owner embeddable with ?embedded={"owner":1}
                'embeddable': True
            },
        },
    }
}

interests = {
    'cache_control': 'max-age=10,must-revalidate',
    'cache_expires': 10,
    'schema': {
        'interest': {
            'type': 'objectid',
            'required': True,
            'data_relation': {
                'resource': 'works',
                # make the interest embeddable with ?embedded={"interest":1}
                'embeddable': True
            },
        }
    }
}

# The DOMAIN dict explains which resources will be available and how they
# will be accessible to the API consumer.
DOMAIN = {
    'people': people,
    'works': works,
    'interests': interests,
}
The endpoints:
http://127.0.0.1:5000/works?embedded={"owner":1} works as expected.
http://127.0.0.1:5000/interests?embedded={"interest":1}&embedded={"owner":1} does not return the desired result: the interest is embedded, but its owner is not.
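For what it's worth, single-level embedding does resolve on each resource individually. A quick sketch with the requests library, assuming the local server above, shows what comes back:

import requests

# Single-level embedding works: each item's 'interest' field is replaced
# by the full works document...
r = requests.get('http://127.0.0.1:5000/interests',
                 params={'embedded': '{"interest": 1}'})
print(r.json())

# ...but inside that embedded document, 'owner' stays a plain ObjectId;
# the second hop (works -> people) is not resolved in the same request.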