I do a lot of JavaScript projects, and I miss a great feature from PHPStorm. I don't know Python very well, so I hope you can help me.
This is what I want:
'test'.log => console.log('test');
test.log => console.log(test);
So, with a single tab trigger, .log,
I want to retrieve anything before the .log and then transform it. How can I do this?
You can just create a plugin to retrieve the text before .log and replace it in the view:
import re

import sublime
import sublime_plugin


class PostSnippetLogCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        view = self.view
        for sel in view.sel():
            pos = sel.b
            text_before = view.substr(sublime.Region(view.line(pos).a, pos))
            # match the text before the caret in reversed order
            m = re.match(r"gol\.(\S*)", text_before[::-1])
            if not m:
                continue
            # retrieve the text before .log and re-establish the correct order
            text_content = m.group(1)[::-1]
            # create the replacement text and region
            replace_text = "console.log({});".format(text_content)
            replace_reg = sublime.Region(pos - len(m.group(0)), pos)
            # replace the text
            view.replace(edit, replace_reg, replace_text)
Afterwards, add this key binding to trigger the command when the text before the caret ends with .log inside a JavaScript document:
{
    "keys": ["tab"],
    "command": "post_snippet_log",
    "context":
    [
        { "key": "selector", "operand": "source.js" },
        { "key": "preceding_text", "operator": "regex_contains", "operand": "\\.log$" },
    ],
},
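With this in place, pressing Tab after test.log in a JavaScript file expands it to console.log(test);, and 'test'.log expands to console.log('test');, matching the two examples above.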
I'd like to create a Python regex that can select a Terraform resource block in its entirety. There are multiple resource blocks in a file (example below), and I want to select each one separately.
I've tried the following regexes. The first one gets caught up when there are multiple closing brackets in the code. The second one just selects the whole file.
1) match = re.search(r'resource.*?\{(.*?)\}', code, re.DOTALL)
2) match = re.search(r'resource.*?\{(.*)\}', code, re.DOTALL)
Sample file:
resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = aws_s3_bucket.newbucket.id
  lambda_function {
    lambda_function_arn = aws_lambda_function.test_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = var.prefix
    filter_suffix       = var.suffix
  }
}

resource "aws_s3_bucket" "newbucket" {
  bucket        = var.bucket_name
  force_destroy = true
  acl           = var.acl_value
}
Generally speaking, you want to avoid using regex to parse structured formats such as HTML, XML, or, in this case, HCL. Instead, use a parser like pyhcl:
import hcl
import json

# parse the Terraform file into a nested dict
with open("stack.tf") as main:
    obj = hcl.load(main)

print(json.dumps(obj, indent=4))
print(f'newbucket-Force-Destroy: {obj["resource"]["aws_s3_bucket"]["newbucket"]["force_destroy"]}')
Then you can parse it all into a dict and just look up any values you are interested in.
Output:
$ python ./stack.py
{
    "resource": {
        "aws_s3_bucket_notification": {
            "aws-lambda-trigger": {
                "bucket": "aws_s3_bucket.newbucket.id",
                "lambda_function": {
                    "lambda_function_arn": "aws_lambda_function.test_lambda.arn",
                    "events": [
                        "s3:ObjectCreated:*"
                    ],
                    "filter_prefix": "var.prefix",
                    "filter_suffix": "var.suffix"
                }
            }
        },
        "aws_s3_bucket": {
            "newbucket": {
                "bucket": "var.bucket_name",
                "force_destroy": true,
                "acl": "var.acl_value"
            }
        }
    }
}
newbucket-Force-Destroy: True
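Since the goal was to handle each resource block separately, here is a minimal sketch that walks the parsed dict (it assumes the obj variable from the snippet above and the type/name nesting shown in the output):

# Walk each resource block separately in the parsed dict
for resource_type, instances in obj["resource"].items():
    for name, body in instances.items():
        print(f"{resource_type}.{name}")
        print(json.dumps(body, indent=4))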
I have a namespace already created, with defined tags applied to my resources. When I try adding new tags to a resource, the old tags get deleted.
I would like to keep the old tag data and return it along with the new tags. Please help me with how I can achieve this.
Get volume details from a specific compartment:
import oci

config = oci.config.from_file("~/.oci/config")
core_client = oci.core.BlockstorageClient(config)
get_volume_response = core_client.get_volume(
    volume_id="ocid1.test.oc1..<unique_ID>EXAMPLE-volumeId-Value")

# Get the data from response
print(get_volume_response.data)
Output:
{
    "availability_domain": "eto:PHX-AD-1",
    "compartment_id": "ocid1.compartment.oc1..aaaaaaaapmj",
    "defined_tags": {
        "OMCS": {
            "CREATOR": "xyz#gmail.com"
        },
        "Oracle-Tags": {
            "CreatedBy": "xyz#gmail.com",
            "CreatedOn": "2022-07-5T08:29:24.865Z"
        }
    },
    "display_name": "test_VG",
    "freeform_tags": {},
    "id": "ocid1.volumegroup.oc1.phx.abced",
    "is_hydrated": null,
    "lifecycle_state": "AVAILABLE",
    "size_in_gbs": 100,
    "size_in_mbs": 102400,
    "source_details": {
        "type": "volumeIds",
        "volume_ids": [
            "ocid1.volume.oc1.phx.xyz"
        ]
    }
}
I want the API call below to update the tags while keeping the old data.
Old tags:
"defined_tags": {
    "OMCS": {
        "CREATOR": "xyz#gmail.com"
    },
    "Oracle-Tags": {
        "CreatedBy": "xyz#gmail.com",
        "CreatedOn": "2022-07-5T08:29:24.865Z"
    }
}
import oci

config = oci.config.from_file("~/.oci/config")
core_client = oci.core.BlockstorageClient(config)
update_volume_response = core_client.update_volume(
    volume_id="ocid1.test.oc1..<unique_ID>EXAMPLE-volumeId-Value",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        defined_tags={
            'OMCS': {
                'INSTANCE': 'TEST',
                'COMPONENT': 'temp1.mt.exy.vcn.com'
            }
        },
        display_name="TEMPMT01"))
print(update_volume_response.data)
I also tried the following, but got an AttributeError:
for tag in get_volume_response.data:
    def_tag.appened(tag.defined_tags)
return (def_tag)
Please help: how can I append to the defined_tags?
Tags are defined as a dict in OCI, so appending works the same way as adding entries to any dict.
Below is the code for updating the defined_tags for Block Volumes in OCI.
import oci
from oci.config import from_file

configAPI = from_file()  # config file is read from the user's home location, i.e. ~/.oci/config
core_client = oci.core.BlockstorageClient(configAPI)
get_volume_response = core_client.get_volume(
    volume_id="ocid1.volume.oc1.ap-hyderabad-1.ameen")

# Get the data from the response
volume_details = get_volume_response.data
defined_tags = getattr(volume_details, "defined_tags")
freeform_tags = getattr(volume_details, "freeform_tags")

# Add new tags as required. As defined_tags is a dict, adding a new key/value
# pair works like below. If there are multiple tags to add, use the dict's
# update() method instead.
defined_tags["OMCS"]["INSTANCE"] = "TEST"
defined_tags["OMCS"]["COMPONENT"] = "temp1.mt.exy.vcn.com"

update_volume_response = core_client.update_volume(
    volume_id="ocid1.volume.oc1.ap-hyderabad-1.ameen",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        defined_tags=defined_tags,
        freeform_tags=freeform_tags))
print(update_volume_response.data)
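If there are several tags to add at once, the dict update() method mentioned in the comment above does the same thing in one call:

# Equivalent bulk addition using the dict's update() method
defined_tags["OMCS"].update({
    "INSTANCE": "TEST",
    "COMPONENT": "temp1.mt.exy.vcn.com",
})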
This is an example of a JSON database that I will work with in my Python code.
{
    "name1": {
        "file": "abc",
        "delimiter": "n"
    },
    "name2": {
        "file": "def",
        "delimiter": "n"
    }
}
Pretend that a user of my code presses a GUI button that is supposed to change the name of "name1" to whatever the user typed into a textbox.
How do I change "name1" to a custom string without manually copying and pasting the entire JSON database into my actual code? I want the code to load the JSON database and change the name by itself.
Load the JSON object into a dict. Grab the name1 entry. Create a new entry with the desired key and the same value. Delete the original entry. Dump the dict back to your JSON file.
This is likely not the best way to perform the task, though. You could instead use sed on Linux, or its Windows equivalent (depending on your installed apps), to make the simple stream-edit change.
If I understand the task correctly, here is an example:
import json

user_input = input('Name: ')

with open("db.json") as f:
    db = json.load(f)

# pop() removes the old key and returns its value, which is
# reinserted under the key the user typed in
db[user_input] = db.pop('name1')

with open("db.json", 'w') as f:
    json.dump(db, f)
You can use the object_hook parameter that json.loads() accepts to detect JSON objects (dictionaries) that have an entry associated with the old key, and re-associate its value with the new key when they're encountered.
This can be implemented as a function as follows:
import json

def replace_key(json_repr, old_key, new_key):
    def decode_dict(a_dict):
        try:
            entry = a_dict.pop(old_key)
        except KeyError:
            pass  # Old key not present - no change needed.
        else:
            a_dict[new_key] = entry
        return a_dict

    return json.loads(json_repr, object_hook=decode_dict)
data = '''{
    "name1": {
        "file": "abc",
        "delimiter": "n"
    },
    "name2": {
        "file": "def",
        "delimiter": "n"
    }
}
'''
new_data = replace_key(data, 'name1', 'custom string')
print(json.dumps(new_data, indent=4))
Output:
{
    "name2": {
        "file": "def",
        "delimiter": "n"
    },
    "custom string": {
        "file": "abc",
        "delimiter": "n"
    }
}
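Note that the renamed entry now appears after "name2": popping the old key and re-adding its value under the new key moves it to the end of the (insertion-ordered) dict, although key order carries no meaning in a JSON object.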
I got the basic idea from @Mike Brennan's answer to another JSON-related question, How to get string objects instead of Unicode from JSON?
I have a Puppet manifest file, init.pp, for my Puppet module.
In this file there are parameters for the class, and in most cases they're written in the same way:
Example Input:
class test_module(
  $first_param = 'test',
  $second_param = 'new' )
What is the best way that I can parse this file with Python and get a dict object like this, which includes all the class parameters?
Example output:
param_dict = {'first_param':'test', 'second_param':'new'}
Thanks in advance :)
Puppet Strings is a Ruby gem that can be installed on top of Puppet and can output a JSON document containing lists of the class parameters, documentation, and so on.
After installing it (see above link), run this command either in a shell or from your Python program to generate JSON:
puppet strings generate --emit-json-stdout init.pp
This will generate:
{
    "puppet_classes": [
        {
            "name": "test_module",
            "file": "init.pp",
            "line": 1,
            "docstring": {
                "text": "",
                "tags": [
                    {
                        "tag_name": "param",
                        "text": "",
                        "types": [
                            "Any"
                        ],
                        "name": "first_param"
                    },
                    {
                        "tag_name": "param",
                        "text": "",
                        "types": [
                            "Any"
                        ],
                        "name": "second_param"
                    }
                ]
            },
            "defaults": {
                "first_param": "'test'",
                "second_param": "'new'"
            },
            "source": "class test_module(\n $first_param = 'test',\n $second_param = 'new' ) {\n}"
        }
    ]
}
(JSON trimmed slightly for brevity)
You can load the JSON in Python with json.loads, and extract the parameter names from root["puppet_classes"][0]["docstring"]["tags"] (the entries where tag_name is param) and any default values from root["puppet_classes"][0]["defaults"] (note that puppet_classes is a list).
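For instance, here is a minimal sketch of that extraction (the subprocess call assumes puppet is available on your PATH; you could equally pipe the command's output to a file and read that instead):

import json
import subprocess

# Run Puppet Strings and capture the JSON it emits on stdout
result = subprocess.run(
    ["puppet", "strings", "generate", "--emit-json-stdout", "init.pp"],
    capture_output=True, text=True, check=True)
root = json.loads(result.stdout)

klass = root["puppet_classes"][0]
params = [tag["name"] for tag in klass["docstring"]["tags"]
          if tag["tag_name"] == "param"]
param_dict = {name: klass["defaults"].get(name) for name in params}
print(param_dict)  # {'first_param': "'test'", 'second_param': "'new'"}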
You can use a regular expression (straightforward but fragile):
import re

def parse(data):
    mm = re.search(r'\((.*?)\)', data, re.MULTILINE)
    dd = {}
    if not mm:
        return dd
    matches = re.finditer(r"\s*\$(.*?)\s*=\s*'(.*?)'", mm.group(1), re.MULTILINE)
    for mm in matches:
        dd[mm.group(1)] = mm.group(2)
    return dd
You can use it as follows:
import codecs

with codecs.open(filename, 'r') as ff:
    dd = parse(ff.read())
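(On Python 3 the codecs module isn't needed; the built-in open(filename, encoding='utf-8') does the same job.)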
I don't know about the "best" way, but one way would be:
1) Set up Rspec-puppet (see Google or my blog post for how to do that).
2) Compile your code and generate a Puppet catalog. See my other blog post for that.
Now, the Puppet catalog you compiled is a JSON document.
3) Visually inspect the JSON document to find the data you are looking for. Its precise location in the JSON document depends on the version of Puppet you are using.
4) You can now use Python to extract the data as a dictionary from the JSON document.
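A minimal sketch of step 4, assuming you saved the compiled catalog as catalog.json (the filename, and where exactly the parameters live, are assumptions per step 3):

import json

# Load the compiled catalog; the filename is an assumption
with open("catalog.json") as f:
    catalog = json.load(f)

# Where the class parameters live varies by Puppet version (step 3),
# so start by inspecting the document's top-level structure
print(list(catalog.keys()))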
I want to get "path" from the JSON file below. I used json.load to read the JSON file and then parsed it one level at a time using for key, value in data.items(), which leads to a lot of nested for loops (say six) just to get to the value of "path". Is there any simpler method to retrieve the value of path?
The complete JSON file can be found here; below is a snippet of it.
{
    "products": {
        "com.ubuntu.juju:12.04:amd64": {
            "version": "2.0.1",
            "arch": "amd64",
            "versions": {
                "20161129": {
                    "items": {
                        "2.0.1-precise-amd64": {
                            "release": "precise",
                            "version": "2.0.1",
                            "arch": "amd64",
                            "size": 23525972,
                            "path": "released/juju-2.0.1-precise-amd64.tgz",
                            "ftype": "tar.gz",
                            "sha256": "f548ac7b2a81d15f066674365657d3681e3d46bf797263c02e883335d24b5cda"
                        }
                    }
                }
            }
        },
        "com.ubuntu.juju:14.04:amd64": {
            "version": "2.0.1",
            "arch": "amd64",
            "versions": {
                "20161129": {
                    "items": {
                        "2.0.1-trusty-amd64": {
                            "release": "trusty",
                            "version": "2.0.1",
                            "arch": "amd64",
                            "size": 23526508,
                            "path": "released/juju-2.0.1-trusty-amd64.tgz",
                            "ftype": "tar.gz",
                            "sha256": "7b86875234477e7a59813bc2076a7c1b5f1d693b8e1f2691cca6643a2b0dc0a2"
                        }
                    }
                }
            }
        },
You can use a recursive generator:
def get_paths(data):
    if 'path' in data:
        yield data['path']
    for k in data.keys():
        if isinstance(data[k], dict):
            for i in get_paths(data[k]):
                yield i

for path in get_paths(json_data):  # json_data is the loaded JSON data
    print(path)
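On Python 3.3+, the inner loop can be shortened to yield from get_paths(data[k]); the behaviour is identical.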
Is the path key always at the same depth in the loaded JSON (which is a dict)? If so, what about doing:
products = loaded_json['products']
for product in products.values():
    for version in product['versions'].values():
        for item in version['items'].values():
            print(item['path'])
If not, the answer of Yevhen Kuzmovych is clearly better, cleaner and more general than mine.
If you only care about the path, I think using any JSON parser is overkill; you can just use the built-in re module with the following pattern: (\"path\":\s*\")(.*\s*)(?=\",). I didn't test it against the whole file, but you should be able to figure out the best pattern fairly easily.
If you only need the file names present in path field, you can easily get them by simply parsing the file:
import re

files = []
pathre = re.compile(r'\s*"path"\s*:\s*"(.*?)"')

with open('file.json') as fd:
    for line in fd:
        if "path" in line:
            m = pathre.match(line)
            if m is not None:
                files.append(m.group(1))
If you need to process the path and sha256 fields together:
files = []
pathre = re.compile(r'\s*"path"\s*:\s*"(.*?)"')
share = re.compile(r'\s*"sha256"\s*:\s*"(.*?)"')
path = None

with open('file.json') as fd:
    for line in fd:
        if "path" in line:
            m = pathre.match(line)
            path = m.group(1)
        elif "sha256" in line:
            m = share.match(line)
            if path is not None:
                files.append((path, m.group(1)))
            path = None
You can use a query language like JSONPath. The Python implementation can be found here: https://pypi.python.org/pypi/jsonpath-rw
Assuming you have your JSON content already loaded, you can do something like the following:
from jsonpath_rw import jsonpath, parse

# Load your JSON content first from a file or from a string
# json_data = ...
jsonpath_expr = parse('products..path')

for match in jsonpath_expr.find(json_data):
    print(match.value)
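For the sample document above, this prints released/juju-2.0.1-precise-amd64.tgz and released/juju-2.0.1-trusty-amd64.tgz.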
For a further discussion you can read this: Is there a query language for JSON?