Subgroup ArcPy list Query - python

Morning, folks.
I have two equal sets of layers, disposed in subgroups in my ArcGIS Pro (2.9.0), as shown here.
It's important that they have the same name (Layer1, Layer2, ...) in both groups.
Now I'm writing an ArcPy script that applies a definition query, but I want it to affect only one specific sublayer (e.g. Compare\Layer1 and Compare\Layer2).
For now, I have this piece of code that, I hope, can help.
p = arcpy.mp.ArcGISProject('current')
m = p.listMaps()[0]
l = m.listLayers()
for row in l:
    print(row.name)

COD_QUERY = 123
for row in l:
    if row.name in ('Compare\Layer1'):
        row.definitionQuery = "CODIGO_EOL = {}".format(COD_QUERY)
        print('ok')
When I write 'Compare\Layer1', which is supposed to select only the Layer1 placed in the Compare group, the code doesn't work as expected and applies the query to both Compare\Layer1 and Base\Layer1. That's the exact problem that I'm having.
Hope I can find some help from you guys. XD

The layer's name does not include the group layer's name (the longName property, by contrast, does include the group layer structure).
Try using a wildcard (follow the link and search for listLayers) and filter for the particular group layer. A group layer object also has a listLayers method, so you can leverage it again to get a specific layer.
import arcpy

COD_QUERY = 123

project = arcpy.mp.ArcGISProject("current")
map = project.listMaps()[0]
filtered_group_layers = map.listLayers("Compare")
if filtered_group_layers and filtered_group_layers[0].isGroupLayer:
    filtered_layers = filtered_group_layers[0].listLayers("Layer1")
    if filtered_layers:
        filtered_layers[0].definitionQuery = f"CODIGO_EOL = {COD_QUERY}"
Or you can use loops. The key here is to filter out the group layers using the isGroupLayer property before accessing each group layer's listLayers method.
import arcpy

COD_QUERY = 123

project = arcpy.mp.ArcGISProject("current")
map = project.listMaps()[0]
group_layers = (layer for layer in map.listLayers() if layer.isGroupLayer)
for group_layer in group_layers:
    if group_layer.name == "Compare":  # exact match, not a substring test
        for layer in group_layer.listLayers():
            if layer.name == "Layer1":
                layer.definitionQuery = f"CODIGO_EOL = {COD_QUERY}"
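A side note on why the membership test in the original snippet misfires: ('Compare\Layer1') is just a parenthesized string, not a tuple, so `in` performs a substring check. This is plain Python and can be demonstrated without ArcPy:

```python
# ('Compare\\Layer1') is NOT a one-element tuple; the parentheses are redundant,
# so `in` performs a substring test against the string itself.
name = 'Layer1'
print(name in ('Compare\\Layer1'))   # True: substring match, which is the bug
print(name in ('Compare\\Layer1',))  # False: real tuple membership test
print(name == 'Layer1')              # True: exact comparison avoids the ambiguity
```

The trailing comma is what makes a one-element tuple; without it, an exact equality check is the safer choice.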


Catia select a feature from a specific instance in an assembly

Let's say I have an assembly like this:
MainProduct:
-Product1 (Instance of Part1)
-Product2 (Instance of Part2)
-Product3 (Instance of Part2)
-Product4 (Instance of Part3)
...
Now, I want to copy/paste a feature from Product3 into another one.
But I run into problems when selecting the feature programmatically, because there are 2 instances of the part of that feature.
I can't control which feature will be selected by CATIA.ActiveDocument.Selection.Add(myExtractReference).
CATIA always selects the feature from Product2 instead of the feature from Product3, so the position of the pasted feature will be wrong!
Does anybody know this problem and has a solution to it?
Edit:
The feature reference which I want to copy already exists as a variable because it was newly created (an extract of selected geometry)
I was able to get help elsewhere, but I still want to share my solution. It's written in Python, but in VBA it's almost the same.
The clue is to access CATIA.Selection.Item(1).LeafProduct in order to know where the initial selection was made.
import win32com.client
import pycatia
CATIA = win32com.client.dynamic.DumbDispatch('CATIA.Application')
c_doc = CATIA.ActiveDocument
c_sel = c_doc.Selection
c_prod = c_doc.Product
# New part where the feature should be pasted
new_prod = c_prod.Products.AddNewComponent("Part", "")
new_part_doc = new_prod.ReferenceProduct.Parent
# from user selection
sel_obj = c_sel.Item(1).Value
sel_prod_by_user = c_sel.Item(1).LeafProduct # reference to the actual product where the selection was made
doc_from_sel = sel_prod_by_user.ReferenceProduct.Parent # part doc from selection
hb = doc_from_sel.Part.HybridBodies.Add() # new hybrid body for the extract. will be deleted later on
extract = doc_from_sel.Part.HybridShapeFactory.AddNewExtract(sel_obj)
hb.AppendHybridShape(extract)
doc_from_sel.Part.Update()
# Add the extract to the selection and copy it
c_sel.Clear()
c_sel.Add(extract)
sel_prod_by_catia = c_sel.Item(1).LeafProduct # reference to the product where Catia makes the selection
c_sel_copy() # will call Selection.Copy from VBA. Buggy in Python.
# Paste the extract into the new part in a new hybrid body
c_sel.Clear()
new_hb = new_part_doc.Part.HybridBodies.Item(1)
c_sel.Add(new_hb)
c_sel.PasteSpecial("CATPrtResultWithOutLink")
new_part_doc.Part.Update()
new_extract = new_hb.HybridShapes.Item(new_hb.HybridShapes.Count)
# Redo changes in the part, where the selection was made
c_sel.Clear()
c_sel.Add(hb)
c_sel.Delete()
# Create axis systems from the Position objects of sel_prod_by_user and sel_prod_by_catia
prod_list = [sel_prod_by_user, sel_prod_by_catia]
axs_list = []
for prod in prod_list:
    # conversion to pycatia's Position object, necessary in order to use Position.GetComponents
    pc_pos = pycatia.in_interfaces.position.Position(prod.Position)
    ax_comp = pc_pos.get_components()
    axs = new_part_doc.Part.AxisSystems.Add()
    axs.PutOrigin(ax_comp[9:12])
    axs.PutXAxis(ax_comp[0:3])
    axs.PutYAxis(ax_comp[3:6])
    axs.PutZAxis(ax_comp[6:9])
    axs_list.append(axs)
new_part_doc.Part.Update()
# Translate the extract from the axis system derived from sel_prod_by_catia to the one from sel_prod_by_user
extract_ref = new_part_doc.Part.CreateReferenceFromObject(new_extract)
tgt_ax_ref = new_part_doc.Part.CreateReferenceFromObject(axs_list[0])
ref_ax_ref = new_part_doc.Part.CreateReferenceFromObject(axs_list[1])
new_extract_translated = new_part_doc.Part.HybridShapeFactory.AddNewAxisToAxis(extract_ref, ref_ax_ref, tgt_ax_ref)
new_hb.AppendHybridShape(new_extract_translated)
new_part_doc.Part.Update()
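For readers unfamiliar with GetComponents: the slicing above assumes it returns a 12-element array laid out as three axis vectors followed by the origin. A plain-Python illustration with hypothetical values:

```python
# Hypothetical 12-element transform in the layout Position.GetComponents uses:
# three axis vectors (X, Y, Z) followed by the origin.
components = [1.0, 0.0, 0.0,     # X axis
              0.0, 1.0, 0.0,     # Y axis
              0.0, 0.0, 1.0,     # Z axis
              10.0, 20.0, 30.0]  # origin
x_axis = components[0:3]
y_axis = components[3:6]
z_axis = components[6:9]
origin = components[9:12]
print(origin)  # [10.0, 20.0, 30.0]
```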
I would suggest a different approach. Instead of adding references you get from somewhere (probably by name), add the actual instance of the part to the selection while iterating through all the products, or use instance names to get the correct part.
Here is a simple VBA example of iterating a one-level tree in a select/copy/paste scenario.
If you want to copy features, you have to dive deeper into the Instance objects.
Public Sub CatMain()

    Dim ActiveDoc As ProductDocument
    Dim ActiveSel As Selection

    'of all the checks that people are using, I think this one is the most elegant and reliable
    If TypeOf CATIA.ActiveDocument Is ProductDocument Then
        Set ActiveDoc = CATIA.ActiveDocument
        Set ActiveSel = ActiveDoc.Selection
    Else
        Exit Sub
    End If

    Dim Instance As Product
    For Each Instance In ActiveDoc.Product.Products 'the object-oriented For Each is ideal in this scenario
        If Instance.Products.Count = 0 Then 'beware that products without parts also have 0 items and are therefore mistaken for parts
            Call ActiveSel.Add(Instance)
        End If
    Next

    Call ActiveSel.Copy
    Call ActiveSel.Clear

    Dim NewDoc As ProductDocument
    Set NewDoc = CATIA.Documents.Add("CATProduct")
    Set ActiveSel = NewDoc.Selection

    Call ActiveSel.Add(NewDoc.Product)
    Call ActiveSel.Paste
    Call ActiveSel.Clear

End Sub

Maya python connect to multiple input

I’m really sorry if I’m asking a question that’s been already answered but I couldn’t find an answer.
I’m writing a code that would allow me to connect a translate of several controllers into a blendWeighted node input channels. The amount of the controllers may vary depending on the selection. I’m struggling with the part where they need to connect to a single blendWeighted node input. Could someone tell me how I could connect every new controller to the next input channel of the blendWeighted node?
I’m sorry if my code is a bit childlike, I’m still learning ^^;
sel = mc.ls(sl=True, fl=True)
drvConnect = []
for i in sel:
    name = i.split('_Crv')[0]
    dGP = mc.createNode('transform', n='%s_Drv' % name, p=i)
    drvConnect.append(dGP)
    sh = mc.listRelatives(i, shapes=True)[0]
    blendX = mc.createNode('blendWeighted', n='%s_X' % name)
    blendY = mc.createNode('blendWeighted', n='%s_Y' % name)
    blendZ = mc.createNode('blendWeighted', n='%s_Z' % name)
    mc.connectAttr(dGP + '.translateX', blendX + '.input')
    mc.connectAttr(dGP + '.translateY', blendY + '.input')
    mc.connectAttr(dGP + '.translateZ', blendZ + '.input')
I assume you only want to create a single blendWeighted node. If that's the case, consider that the blendWeighted node's input attribute is an array attribute, so if you want to connect multiple items to it you would need to specify the target index.
For example, to connect the three translate outputs of a node to the first three inputs of the blend node you would use something like this:
mc.connectAttr('ctrl.translateX', 'blend.input[0]')
mc.connectAttr('ctrl.translateY', 'blend.input[1]')
mc.connectAttr('ctrl.translateZ', 'blend.input[2]')
(Maya will take care of creating the items in the array)
In your case you could simply keep a counter of the added items while you loop through the selection and the transform components; note that the counter must be initialized before the loop (just a guide - not tested):
sel = mc.ls(sl=True, fl=True)
drvConnect = []
blendNode = mc.createNode('blendWeighted', n='blend_main')
blendIndex = 0  # next free index in the blend node's input array
for i in sel:
    name = i.split('_Crv')[0]
    dGP = mc.createNode('transform', n='%s_Drv' % name, p=i)
    drvConnect.append(dGP)
    for comp in ('X', 'Y', 'Z'):
        mc.connectAttr(
            '{}.translate{}'.format(dGP, comp),
            '{}.input[{}]'.format(blendNode, blendIndex))
        blendIndex += 1
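The index bookkeeping itself is plain Python and can be checked without Maya: each selected object consumes three consecutive indices, so component c (0=X, 1=Y, 2=Z) of object n lands on input[3*n + c]. A sketch with hypothetical names:

```python
# Compute the blendWeighted input index for each (object, component) pair.
sel = ['armCtrl_Crv', 'legCtrl_Crv']  # hypothetical selection
connections = []
for n, obj in enumerate(sel):
    for c, comp in enumerate('XYZ'):
        index = 3 * n + c
        connections.append('{}_Drv.translate{} -> blend_main.input[{}]'.format(
            obj.split('_Crv')[0], comp, index))
print(connections[-1])  # legCtrl_Drv.translateZ -> blend_main.input[5]
```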

How can I remove two or more subnet from a network?

From the ipaddress module I know the address_exclude method. Below is an example from the documentation:
>>> n1 = ip_network('192.0.2.0/28')
>>> n2 = ip_network('192.0.2.1/32')
>>> list(n1.address_exclude(n2))
[IPv4Network('192.0.2.8/29'), IPv4Network('192.0.2.4/30'),
IPv4Network('192.0.2.2/31'), IPv4Network('192.0.2.0/32')]
But what about if I want to remove two or more subnets from a network? For example, how can I remove from 192.168.10.0/26 its subnets 192.168.10.24/29 and 192.168.10.48/28? The result should be 192.168.10.0/28, 192.168.10.16/29 and 192.168.10.32/28.
I'm trying to write the algorithm that I use in my head with the address_exclude method, but I can't. Is there a simple way to implement what I just explained?
When you exclude one network from another, the result can be multiple networks (original one got split) - so, for the rest of the networks to exclude, you need to first find which part they would fit into before excluding them as well.
Here's one possible solution:
from ipaddress import ip_network, collapse_addresses

complete = ip_network('192.168.10.0/26')

# I chose the larger subnet for exclusion first; this can be automated with network comparison
subnets = list(complete.address_exclude(ip_network('192.168.10.48/28')))

# other network to exclude
other_exclude = ip_network('192.168.10.24/29')

result = []
# Find which subnet the other exclusion will happen in
for sub in subnets:
    # If found, exclude & add the result
    if other_exclude.subnet_of(sub):
        result.extend(list(sub.address_exclude(other_exclude)))
    else:
        # Other subnets can be added directly
        result.append(sub)

# Collapse in case of overlaps
print(list(collapse_addresses(result)))
Output:
[IPv4Network('192.168.10.0/28'), IPv4Network('192.168.10.16/29'), IPv4Network('192.168.10.32/28')]
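The pairwise logic above generalizes to any number of exclusions: keep a running list of remaining pieces and exclude each target from whichever piece contains it. A small sketch (the exclude_subnets name is my own, and it assumes each excluded subnet is fully contained in one remaining piece):

```python
from ipaddress import ip_network, collapse_addresses

def exclude_subnets(network, exclusions):
    """Remove several subnets from a network by excluding them one at a time."""
    remaining = [network]
    for ex in exclusions:
        next_remaining = []
        for sub in remaining:
            if ex.subnet_of(sub):
                # address_exclude splits sub into the pieces not covered by ex
                next_remaining.extend(sub.address_exclude(ex))
            else:
                next_remaining.append(sub)
        remaining = next_remaining
    return sorted(collapse_addresses(remaining))

print(exclude_subnets(ip_network('192.168.10.0/26'),
                      [ip_network('192.168.10.24/29'), ip_network('192.168.10.48/28')]))
```

With the question's example this reproduces 192.168.10.0/28, 192.168.10.16/29 and 192.168.10.32/28; subnet_of requires Python 3.7+.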
Expanding on the idea I posted under rdas's response, here is my solution.
It seems better to split the initial network into the smallest chunks required, do the same for all ranges to be removed, then exclude those chunks and return the collapsed result.
from ipaddress import ip_network, collapse_addresses

def remove_ranges(mynetwork, l_of_ranges):
    # find the smallest chunk, i.e. the largest prefix length (compare numerically, not as strings)
    new_prefix = max(int(x.split('/')[1]) for x in l_of_ranges)
    l_mynetwork = list(ip_network(mynetwork).subnets(new_prefix=new_prefix))
    l_chunked_ranges = []
    for nw in l_of_ranges:
        l_chunked_ranges.extend(ip_network(nw).subnets(new_prefix=new_prefix))
    result = list(collapse_addresses(set(l_mynetwork) - set(l_chunked_ranges)))
    return [str(r) for r in result]

if __name__ == '__main__':
    mynetwork = "10.110.0.0/16"
    l_of_ranges = ["10.110.0.0/18", "10.110.72.0/21", "10.110.80.0/21", "10.110.96.0/21"]
    print(f"My network: {mynetwork}, Existing: {l_of_ranges}")
    a = remove_ranges(mynetwork, l_of_ranges)
    print(f"Remaining: {a}")
With the result:
My network: 10.110.0.0/16, Existing: ['10.110.0.0/18', '10.110.72.0/21', '10.110.80.0/21', '10.110.96.0/21']
Remaining: ['10.110.64.0/21', '10.110.88.0/21', '10.110.104.0/21', '10.110.112.0/20', '10.110.128.0/17']
Which seems to be valid.

Get actual feature names from XGBoost model

I know this question has been asked several times and I've read them but still haven't been able to figure it out.
Like other people, my feature names at the end are shown as f56, f234, f12 etc. and I want to have the actual names instead of f-somethings! This is the part of the code related to the model:
optimized_params, xgb_model = find_best_parameters()  # where fitting and GridSearchCV happens
xgdmat = xgb.DMatrix(X_train_scaled, y_train_scaled)
feature_names = xgdmat.feature_names
final_gb = xgb.train(optimized_params, xgdmat,
                     num_boost_round=find_optimal_num_trees(optimized_params, xgdmat))
final_gb.get_fscore()
mapper = {'f{0}'.format(i): v for i, v in enumerate(xgdmat.feature_names)}
mapped = {mapper[k]: v for k, v in final_gb.get_fscore().items()}
mapped
xgb.plot_importance(mapped, color='red')
I also tried this:
feature_important = final_gb.get_score(importance_type='weight')
keys = list(feature_important.keys())
values = list(feature_important.values())
data = pd.DataFrame(data=values, index=keys, columns=["score"]).sort_values(by = "score", ascending=False)
data.plot(kind='barh')
but still the features are shown as f+number. I'd really appreciate any help.
What I'm doing at the moment is to take the number at the end of each f, like 234 from f234, and use it in X_train.columns[234] to see what the actual name is. However, I'm having second thoughts about whether the name I get this way is really the feature that f234 represents.
First make a dictionary from your original features and map them back to feature names.
# create dict to use later
myfeatures = X_train_scaled.columns
dict_features = dict(enumerate(myfeatures))

# feature importance plot with names f0, f1, ...
axsub = xgb.plot_importance(final_gb)

# get the original names back
Text_yticklabels = list(axsub.get_yticklabels())
lst_yticklabels = [Text_yticklabels[i].get_text().lstrip('f') for i in range(len(Text_yticklabels))]
lst_yticklabels = [dict_features[int(i)] for i in lst_yticklabels]
axsub.set_yticklabels(lst_yticklabels)
print(dict_features)
plt.show()
Here is an example of how it works:
The problem can be solved by using the feature_names parameter when creating your xgb.DMatrix:
xgdmat = xgb.DMatrix(X_train_scaled, y_train_scaled, feature_names=feature_names)
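The renaming trick in the first answer is ordinary dict manipulation and can be sketched without xgboost; feature_names and fscores below are illustrative stand-ins for X_train_scaled.columns and Booster.get_fscore():

```python
# Rebuild real column names from xgboost's generated f<N> keys.
feature_names = ['age', 'income', 'score']  # stand-in for X_train_scaled.columns
fscores = {'f0': 12, 'f2': 7}               # stand-in for Booster.get_fscore()

mapper = {'f{}'.format(i): name for i, name in enumerate(feature_names)}
mapped = {mapper[k]: v for k, v in fscores.items()}
print(mapped)  # {'age': 12, 'score': 7}
```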

Error recognizing parameters for a spatial join using ArcPy

I'm trying to iterate a spatial join through a folder - then iterate a second spatial join through the outputs of the first.
This is my initial script:
import arcpy, os, sys, glob

'''This script loops a spatial join through all the feature classes
in the input folder, then performs a second spatial join on the output
files'''

# set local variables
input = "C:\\Users\\Ryck\\Test\\test_Input"
boundary = "C:\\Users\\Ryck\\Test\\area_Input\\boundary_Test.shp"
admin = "C:\\Users\\Ryck\\Test\\area_Input\\admi_Boundary_Test.shp"
outloc = "C:\\Users\\Ryck\\Test\\join_02"

# overwrite any files with the same name
arcpy.env.overwriteOutput = True

# perform spatial joins
for fc in input:
    outfile = outloc + fc
    join1 = [arcpy.SpatialJoin_analysis(fc, boundary, outfile) for fc in input]
for fc in join1:
    arcpy.SpatialJoin_analysis(fc, admin, outfile)
I keep receiving ERROR 000732: Target Features: Dataset C does not exist or is not supported.
I'm sure this is a simple error, but none of the solutions that have previously been recommended to solve this error allow me to still output my results to their own folder.
Thanks in advance for any suggestions
You appear to be trying to loop through a given directory, performing the spatial join on (shapefiles?) contained therein.
However, this syntax is a problem:
input = "C:\\Users\\Ryck\\Test\\test_Input"
for fc in input:
    # do things to fc
In this case, the for loop is iterating over a string. So each time through the loop, it takes one character at a time: first C, then :, then \... and of course the arcpy function fails with this input, because it expects a file path, not a character. Hence the error: Target Features: Dataset C does not exist...
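The character-by-character behavior is easy to demonstrate outside ArcPy:

```python
# Iterating over a string yields one character per iteration,
# which is why the first "dataset" ArcPy saw was just "C".
input_path = "C:\\Users\\Ryck\\Test\\test_Input"
chars = [fc for fc in input_path]
print(chars[:4])  # ['C', ':', '\\', 'U']
```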
To instead loop through files in your input directory, you need a couple extra steps. Build a list of files, and then iterate through that list.
arcpy.env.workspace = input          # sets the workspace to the input directory, for the next tool
shp_list = arcpy.ListFiles("*.shp")  # list of all shapefiles in the workspace
for fc in shp_list:
    # do things to fc
(Ref. this answer on GIS.SE.)
After working through some kinks, and thanks to the advice of @erica, I decided to abandon my original concept of a nested for loop and take a simpler approach. I'm still working on a GUI that will create system arguments to assign to the variables and use as parameters for the spatial joins, but for now this is the solution I've worked out.
import arcpy

input = "C:\\Users\\Ryck\\Test\\test_Input\\"
boundary = "C:\\Users\\Ryck\\Test\\area_Input\\boundary_Test.shp"
outloc = "C:\\Users\\ryck\\Test\\join_01"
admin = "C:\\Users\\Ryck\\Test\\area_Input\\admin_boundary_Test.shp"
outloc1 = "C:\\Users\\Ryck\\Test\\join_02"

arcpy.env.workspace = input
arcpy.env.overwriteOutput = True

shp_list = arcpy.ListFeatureClasses()
print shp_list
for fc in shp_list:
    join1 = arcpy.SpatialJoin_analysis(fc, boundary, "C:\\Users\\ryck\\Test\\join_01\\" + fc)

arcpy.env.workspace = outloc
fc_list = arcpy.ListFeatureClasses()
print fc_list
for fc in fc_list:
    arcpy.SpatialJoin_analysis(fc, admin, "C:\\Users\\ryck\\Test\\join_02\\" + fc)
Setting multiple environments and using the actual paths feels clunky, but it works for me at this point.
