I'm writing a Python script to find errors in attribute codes in a feature class. To find some of these errors I need to use the Select By Location tool. But that tool only takes layers as inputs, so I have to create a layer from the feature class. If I update the error code field in the layer, how do I then populate the error code field in the original feature class?
Update
One can use the UpdateCursor from the arcpy data access module (arcpy.da), which is newer and faster than the original form of the UpdateCursor I initially described.
error_code = -1
with arcpy.da.UpdateCursor('lulcTV', ['error_field', 'VALUE']) as coverCSR:
    for tree in coverCSR:
        species = tree[1]  # returns 'VALUE'; not really needed, but good to know about
        tree[0] = error_code  # sets the first requested field, 'error_field'
        coverCSR.updateRow(tree)
Original answer
Seems like you could use an UpdateCursor. Example:
coverCSR = arcpy.UpdateCursor('lulcTV')
error_code = -1
for tree in coverCSR:
    species = tree.getValue('VALUE')  # not really needed, but good to know about
    tree.setValue('error_field', error_code)
    coverCSR.updateRow(tree)
This iterates over all rows, one by one.
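For the layer-versus-feature-class part of the question: a feature layer is just a view of the feature class, so edits made through a cursor on the layer are written to the underlying feature class, and a cursor opened on the layer honours its current selection. A hedged sketch of the whole workflow (dataset and field names such as 'roads', 'buffers' and 'error_field' are placeholders, not from the question):

import arcpy

# Make a layer from the feature class, select by location, then update only
# the selected rows; the edits land in the original feature class.
arcpy.MakeFeatureLayer_management('roads', 'roads_lyr')
arcpy.SelectLayerByLocation_management('roads_lyr', 'INTERSECT', 'buffers')

error_code = -1
with arcpy.da.UpdateCursor('roads_lyr', ['error_field']) as cursor:
    for row in cursor:  # only selected features are iterated
        row[0] = error_code
        cursor.updateRow(row)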
Related
According to the documentation of tf.dataset.shuffle, it will fill a buffer of size k and then shuffle inside it. Though I don't want the order of the data to be changed, I do want it to be buffered. Then I found there is tf.dataset.prefetch, which says "This allows later elements to be prepared while the current element is being processed."
From the description I guess prefetch is what I want (i.e. pre-loading the data while the previous data are being used in training), but while trying to look into the code of tf.dataset.shuffle to see if it actually calls tf.dataset.prefetch, I got stuck in these lines (pasted below) and cannot find where shuffle_dataset_v3 is defined.
variant_tensor = gen_dataset_ops.shuffle_dataset_v3(
    input_dataset._variant_tensor,  # pylint: disable=protected-access
    buffer_size=self._buffer_size,
    seed=self._seed,
    seed2=self._seed2,
    seed_generator=gen_dataset_ops.dummy_seed_generator(),
    reshuffle_each_iteration=self._reshuffle_each_iteration,
    **self._flat_structure)
My main question is whether prefetch is the replacement for shuffle in terms of buffering the data. It would also be nice if someone could point me to where shuffle_dataset_v3 is implemented.
Yes, prefetch is for buffering data.
gen_dataset_ops and the other gen_xxx_ops modules are not included in the source tree because they are automatically generated by bazel to wrap the C++ implementations for use in Python. You should be able to find the gen_xxx_ops code in your local installation, for example ${PYTHON_ROOT}/site-packages/tensorflow/python/ops/gen_dataset_ops.py
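A minimal sketch of using prefetch to buffer data without reordering it (tf.data.AUTOTUNE assumes a recent TF 2.x; older versions use tf.data.experimental.AUTOTUNE):

import tensorflow as tf

dataset = tf.data.Dataset.range(10)
dataset = dataset.map(lambda x: x * 2)        # some preprocessing
dataset = dataset.prefetch(tf.data.AUTOTUNE)  # prepare later elements while the current one is consumed

for element in dataset:
    print(element.numpy())  # elements still arrive in their original order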
Is there an easy way to extract a list of all variables with a start attribute from a Modelica model? The ultimate goal is to run a simulation until it reaches steady state, then run a Python script that compares the values of the start attributes against the steady-state values, so that I can identify start values that were chosen badly.
In the Dymola Python interface I could not find such functionality. Another approach could be to generate the modelDescription.xml and parse it; I assume the information is available somewhere in there, but for that approach I also feel I need help to get started.
Similar to this answer, you can easily extract that info from the modelDescription.xml inside an FMU with FMPy.
Here is a small runnable example:
from fmpy import read_model_description
from fmpy.util import download_test_file
from pprint import pprint
fmu_filename = 'CoupledClutches.fmu'
download_test_file('2.0', 'CoSimulation', 'MapleSim', '2016.2', 'CoupledClutches', fmu_filename)
model_description = read_model_description(fmu_filename)
start_vars = [v for v in model_description.modelVariables if v.start and v.causality == 'local']
pprint(start_vars)
The files dsin.txt and dsfinal.txt might help you with this. They have the same structure, with values at the start and at the end of the simulation; by renaming dsfinal.txt to dsin.txt you can start your simulation from the (e.g. steady-state) values you computed in a previous run.
It might be worth working with these two files if you already have in mind using such values for running other simulations.
They also give you information about solver/simulation settings that you won't find in the .mat result files (if that is of any interest for your case).
However, if it is only a comparison between the start and final values of variables that are present in the result files anyway, a better choice might be to use Python and a library to read the result.mat file (DyMat, ModelicaRes, etc.). It is then a matter of comparing the start and end values of the signals of interest.
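A rough sketch of that comparison, assuming the DyMat package and its DyMatFile API (names(), data()); the result file name and tolerance are placeholders:

import DyMat

res = DyMat.DyMatFile('result.mat')
for name in res.names():
    values = res.data(name)          # trajectory of the variable over the simulation
    start, final = values[0], values[-1]
    if abs(final - start) > 1e-6:    # arbitrary tolerance for a "badly chosen" start value
        print(f"{name}: start={start}, final={final}")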
After some trial and error, I came up with this python code snippet to get that information from modelDescription.xml:
import xml.etree.ElementTree as ET
root = ET.parse('modelDescription.xml').getroot()
for ScalarVariable in root.findall('ModelVariables/ScalarVariable'):
    varStart = ScalarVariable.find('*[@start]')
    if varStart is not None:
        name = ScalarVariable.get('name')
        value = varStart.get('start')
        print(f"{name} = {value};")
To generate the modelDescription.xml file, run Dymola translation with the flag
Advanced.FMI.GenerateModelDescriptionInterface2 = true;
Python standard library has several modules for processing XML:
https://docs.python.org/3/library/xml.html
This snippet uses ElementTree.
This is just a first step, not sure if I missed something basic.
I'm using ArcMap (Esri). I have a polyline layer with information as text which I need to convert to number values. I want to accomplish this using Python scripting in the Field Calculator.
My challenge:
Using the values in one field, I want to define the values in another field.
In my case I need to define the width of a road as a number, depending on the text value in another field.
The road "widthNumber" will depend on the value of the other field, "widthText".
There are a number of ways you can do this. I'm making the assumption that both fields are in the same feature class/shapefile and that the widthNumber field is an int of some type.
The ideal case would be a switch/case statement (Java/C#), but that doesn't exist in Python. So we can either use a dictionary to roughly recreate the switch, or simply load up on ifs. I'm a fan of cleaner code, so I've included the logic for the dictionary, but you can always move that into a bunch of ifs as you deem necessary.
All you have to do is use the pre-logic script code area to write a function which accepts the widthText and returns the widthNumber.
def calculateValue(text):
    switcher = {
        "Ones": 1,
        "Twotow": 2,
        "Threebs": 3,
        "Four": 4,
        "Fivers": 5
    }
    return switcher.get(text, "Invalid")
Then in the bottom section you just call that function and pass in the attribute:
calculateValue(!widthText!)
In this example I did not write in any error handling to deal with invalid values. Depending on how your values are stored, it may be smart to ensure everything is in the same case (upper/lower) to ensure consistency.
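A hedged variant of the function above that normalizes case and returns a numeric sentinel instead of the string "Invalid", so the result stays valid for an integer field (the sentinel value -1 is an assumption, not from the answer):

def calculateValue(text):
    switcher = {
        "ones": 1,
        "twotow": 2,
        "threebs": 3,
        "four": 4,
        "fivers": 5
    }
    # normalize the incoming text so "Fivers", "FIVERS" and "fivers" all match
    return switcher.get(str(text).lower(), -1)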
I want to use an existing PowerPoint presentation to generate a series of reports:
In my imagination the PowerPoint slides will have content in this or a similar form:
Date of report: {{report_date}}
Number of Sales: {{no_sales}}
...
Then my Python app opens the PowerPoint file, fills in the values for this report and saves the report under a new name.
I googled, but could not find a solution for this.
There is python-pptx out there, but that seems to be all about creating a new presentation and not about inserting values into a template.
Can anybody advise?
Ultimately, barring some other library with additional "find" functionality for PPT, you need a brute-force approach: iterate the Slides collection and each Slide's Shapes collection to identify the matching shape. Here is the brute-force version using only win32com:
from win32com import client

find_date = r'{{report_date}}'
find_sales = r'{{no_sales}}'
report_date = '01/01/2016'  # Modify as needed
no_sales = '604'            # Modify as needed
path = 'c:/path/to/file.pptx'
outpath = 'c:/path/to/output.pptx'

ppt = client.Dispatch("PowerPoint.Application")
pres = ppt.Presentations.Open(path, WithWindow=False)
for sld in pres.Slides:
    for shp in sld.Shapes:
        if not shp.HasTextFrame:
            continue  # skip shapes without text (pictures, etc.)
        tr = shp.TextFrame.TextRange
        if find_date in tr.Text:
            tr.Replace(find_date, report_date)
        elif find_sales in tr.Text:
            tr.Replace(find_sales, no_sales)
pres.SaveAs(outpath)
pres.Close()
ppt.Quit()
If these strings are inside other strings with mixed text formatting, it gets trickier to preserve existing formatting, but it should still be possible.
If the template file is still in design and subject to your control, I would consider giving the shape a unique identifier like a CustomXMLPart or you could assign something to the shapes' AlternativeText property. The latter is easier to work with because it doesn't require well-formed XML, and also because it's able to be seen & manipulated via the native UI, whereas the CustomXMLPart is only accessible programmatically, and even that is kind of counterintuitive. You'll still need to do shape-by-shape iteration, but you can avoid the string comparisons just by checking the relevant property value.
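A hedged sketch of the AlternativeText idea, continuing the win32com snippet above (the tag values 'report_date' and 'no_sales' are placeholders you would set as alt text on the relevant shapes in the template):

for sld in pres.Slides:
    for shp in sld.Shapes:
        # compare the property value instead of searching the shape's text
        if shp.AlternativeText == 'report_date':
            shp.TextFrame.TextRange.Text = report_date
        elif shp.AlternativeText == 'no_sales':
            shp.TextFrame.TextRange.Text = no_sales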
I tried this on a ".pptx" file I had hanging around.
A Microsoft Office PowerPoint ".pptx" file is in ".zip" format.
When I unzipped my file, I got an ".xml" file and three directories.
My ".pptx" file has 116 slides comprising 3,477 files and 22 directories/subdirectories.
Normally I would say it is not workable, but since you have only two short changes you could probably figure out what to change and zip the files back up to make a new ".pptx" file.
A warning: there are some XML blobs of binary data in one or more of the ".xml" files.
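A minimal sketch of that observation, just listing the package parts and peeking into one slide's XML (the presentation file name is a placeholder; writing a modified copy back would mean re-zipping every part):

import zipfile

with zipfile.ZipFile('presentation.pptx') as z:
    print(len(z.namelist()), 'parts in the package')
    xml = z.read('ppt/slides/slide1.xml').decode('utf-8')
    print('{{report_date}}' in xml)  # is the placeholder in this slide?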
You can definitely do what you want with python-pptx, just perhaps not as straightforwardly as you imagine.
You can read the objects in a presentation, including the slides and the shapes on the slides. So if you wanted to change the text of the second shape on the second slide, you could do it like this:
from pptx import Presentation

prs = Presentation('template.pptx')  # placeholder file name
slide = prs.slides[1]
shape = slide.shapes[1]
shape.text = 'foobar'
The only real question is how you find the shape you're interested in. If you can make non-visual changes to the presentation (template), you can determine the shape id or shape name and use that. Or you could fetch the text for each shape and use regular expressions to find your keyword/replacement bits.
It's not without its challenges, and python-pptx doesn't have features specifically designed for this role, but based on the parameters of your question, this is definitely a doable thing.
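A rough sketch of the text-matching route, assuming each placeholder like {{report_date}} lives inside a single run (placeholders split across runs by mixed formatting would need extra handling, as noted above); the file names and replacement map are illustrative:

from pptx import Presentation

replacements = {'{{report_date}}': '01/01/2016', '{{no_sales}}': '604'}

prs = Presentation('template.pptx')
for slide in prs.slides:
    for shape in slide.shapes:
        if not shape.has_text_frame:
            continue  # skip pictures and other shapes without text
        for paragraph in shape.text_frame.paragraphs:
            for run in paragraph.runs:
                for placeholder, value in replacements.items():
                    if placeholder in run.text:
                        run.text = run.text.replace(placeholder, value)
prs.save('report.pptx')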
Sometimes you need to define values dynamically (like the current datetime, random strings, random integers, file contents, etc.) and use them across different steps without being explicit or hard-coding the value.
So my question is: how can I define variables inside steps (the correct way to do it) so that these variables can be used in the following steps?
Some example
Given A random string of length "100" as "my_text"
And I log in to my platform
And I ask to add the following post:
| title | description |
| Some example of title | {{my_text}} |
When I submit the post form
Then The posts table shows these posts:
| title | description |
| Some example of title | {{my_text}} |
And I delete any post containing in the description "{{my_text}}"
This is a basic example trying to explain why I would like to define variables in steps and save them in the context so they can be used in the following steps.
My idea was to modify the before_step and after_step hooks to set a variable on the context that stores my custom variables, like this:
def before_step(context, step):
    if not hasattr(context, 'vars'):
        context.vars = {}
    if hasattr(context, 'table') and context.table:
        parse_table(context)

def parse_table(context):
    # Use a regex to check each cell for "{{<identifier>}}" and, on a match,
    # replace the cell value with context.vars[identifier]. The step
    # "The posts table shows these posts" then never sees the literal
    # "{{my_text}}"; it only receives the substituted random string.
    ...
Scenario Outlines do something like this: they define variables like "<some_identifier>" and then, for each Examples row, replace the value in the step.
The idea is basically to reproduce that behaviour for any kind of step, simple or using tables.
Is it the right way to do something like this?
From Behave docs on the context:
When behave launches into a new feature or scenario it adds a new layer to the context, allowing the new activity level to add new values, or overwrite ones previously defined, for the duration of that activity. These can be thought of as scopes:
@given('I request a new widget for an account via SOAP')
def step_impl(context):
    client = Client("http://127.0.0.1:8000/soap/")
    # method client.Allocate(...) returns a dict
    context.response = client.Allocate(customer_first='Firstname',
                                       customer_last='Lastname', colour='red')
    # context vars can be set more directly
    context.new_var = "My new variable!"

@then('I should receive an OK SOAP response')
def step_impl(context):
    eq_(context.response['ok'], 1)
    cnv = str(context.new_var)
    print(f"This is my new variable: '{cnv}'")
So, the value can be set using dot notation and retrieved the same way.
To answer this question, one needs to ask:
Does the test data need to be controlled externally? For example, test data can be input from the command line so that the value can be chosen explicitly.
If the answer is no, then we probably should not bother hard-coding anything in the feature file. We can let the data be generated in one step, saved in the context, and accessed again in any following step.
The example I can think of is exactly the one the question describes. Do we care what the random text content we generated, posted and verified actually is? Probably not. Then we should not expose such detail to the user (i.e. the feature file), since it is not important to the behaviour we are testing.
If the answer is yes, we do need a bit of a hack to make it happen. I am experiencing a case like this: I want to change the test data when I run the test, so I don't have to hard-code it in the feature files in a table or scenario outline. How can I do this?
I can use the -D option on the command line to pass in as many user data values as needed, which can then be accessed via the context.config.userdata dictionary in any step. If the amount of test data is very limited, this approach is an easy way to go. But if the test data set contains many values that no one wants to type one by one on the command line, it can be stored externally, for example in an ini file with section names like testdata_1...testdata_n; a single string can then be passed in from the command line to address the section name in this config file. The test data can be read out in before_all, before_scenario, etc., and used in all steps, as sketched below.
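A rough sketch of that approach; the file name testdata.ini and the -D dataset=<section> flag are assumptions for illustration (e.g. behave -D dataset=testdata_1):

# environment.py
import configparser

def before_all(context):
    section = context.config.userdata.get('dataset', 'testdata_1')
    parser = configparser.ConfigParser()
    parser.read('testdata.ini')
    # expose the chosen section to all steps as a plain dict
    context.testdata = dict(parser[section]) if parser.has_section(section) else {}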
In my experience, you cannot create a dynamic value in the feature file.
For example, this step:
Given A random string of length "100" as "my_text"
I don't see any way to change {my_text} each time you run the scenario (not counting using behave -D to pass the value into context.config.userdata; I think that is also the wrong approach).
Even a Scenario Outline actually just splits into many scenarios; each scenario will have a different value, but the value of {my_text} is already fixed in the Examples table for each scenario.
The way to make a step dynamic is the step definition (code layer).
You can generate a random string in the step definition @given('A random string of length "100" as "{my_text}"') and use the context (e.g. context.my_text) to store the created value and use it around, for example:
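A minimal sketch of such a step definition (the function name and the use of random.choices are illustrative, not from the answer):

# steps/common_steps.py
import random
import string
from behave import given

@given('A random string of length "{length}" as "{var_name}"')
def step_random_string(context, length, var_name):
    value = ''.join(random.choices(string.ascii_letters, k=int(length)))
    # store it on the context so later steps can read e.g. context.my_text
    setattr(context, var_name, value)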
I also agree with Murphy Meng that you don't need to expose the generated random string explicitly in the feature file. You know which step will use that value; simply read context.my_text in that step to get it. That's it.