PLEASE SEE EDIT FOR SHORT VERSION
Been hunting through the pythonOCC documentation for this.
I have a .step file in inches. Here are the relevant lines in the .step file for confirmation:
#50 = ( CONVERSION_BASED_UNIT( 'INCH', #122 )LENGTH_UNIT( )NAMED_UNIT( #125 ) );
#51 = ( NAMED_UNIT( #127 )PLANE_ANGLE_UNIT( )SI_UNIT( $, .RADIAN. ) );
#52 = ( NAMED_UNIT( #127 )SI_UNIT( $, .STERADIAN. )SOLID_ANGLE_UNIT( ) );
~~~
#122 = LENGTH_MEASURE_WITH_UNIT( LENGTH_MEASURE( 25.4000000000000 ), #267 );
The file reads and displays in the window fine. But when I use manual coordinates to make a bounding box, I find my box is way off. The position is off because the STEP model is not at 0,0,0; the size is off because pythonOCC automatically converts everything into MM, so when I manually enter box dimensions in inches, it reads them as MM. I've tried to work around this by converting everything manually (inches * 25.4), but that is error-prone and ugly.
I know pythonOCC uses line #122 of the STEP file as the conversion ratio, because I've changed it from the above to:
#122 = LENGTH_MEASURE_WITH_UNIT( LENGTH_MEASURE( 1.0 ), #267 );
When I do, my bounding box and STEP model line up perfectly... but pythonOCC still thinks it's converting to MM.
Anyone have any experience changing the default units for pythonOCC?
I've tried to find in the following occ packages:
OCC.STEPControl, OCC.Display, OCC.AIS
and many others.
EDIT:
When I draw my box using my own coordinates like this:
from OCC.gp import gp_Pnt
from OCC.AIS import AIS_Shape
from OCC.BRepPrimAPI import BRepPrimAPI_MakeBox

minPoint = gp_Pnt(*minCoords)  # gp_Pnt takes three floats, so unpack the coordinate tuple
maxPoint = gp_Pnt(*maxCoords)
my_box = AIS_Shape(BRepPrimAPI_MakeBox(minPoint, maxPoint).Shape())
display.Context.Display(my_box.GetHandle())
My coordinates are in inches, but pythonOCC reads them as MM. If I could get my own coordinates to be read as inches, this would be solved. I can't find anything in OCC.Display about how my coordinates are interpreted. Something like OCC.Display.inputUnitsAre("INCHES")?
EDIT 2:
Getting closer looking here:
https://dev.opencascade.org/doc/refman/html/class_units_a_p_i.html
Under UnitsAPI_SystemUnits and SetCurrentUnit... though I'm not sure yet how to call these from Python to test. Working on it.
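Here's a sketch of another lead I plan to try, untested: OpenCASCADE's STEP translator has an Interface_Static parameter, "xstep.cascade.unit", which apparently sets the unit the model is converted to on read (default MM). The import paths below assume a recent pythonocc-core where the bindings live under OCC.Core, and my_part.step is a placeholder filename:

from OCC.Core.IFSelect import IFSelect_RetDone
from OCC.Core.Interface import Interface_Static_SetCVal
from OCC.Core.STEPControl import STEPControl_Reader

# Set the target unit before reading; "xstep.cascade.unit" controls
# the unit the shape is converted to during STEP translation.
Interface_Static_SetCVal("xstep.cascade.unit", "INCH")

reader = STEPControl_Reader()
status = reader.ReadFile("my_part.step")  # hypothetical filename
if status == IFSelect_RetDone:
    reader.TransferRoots()
    shape = reader.OneShape()  # coordinates should now come through in inches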
You'll find documentation for the units here
Take a look at the OCC.Extend.DataExchange module; you'll see the following function:
def write_step_file(a_shape, filename, application_protocol="AP203"):
    """ exports a shape to a STEP file
    a_shape: the topods_shape to export (a compound, a solid etc.)
    filename: the filename
    application protocol: "AP203" or "AP214"
    """
    # a few checks
    assert not a_shape.IsNull()
    assert application_protocol in ["AP203", "AP214IS"]
    if os.path.isfile(filename):
        print("Warning: %s file already exists and will be replaced" % filename)
    # creates and initialise the step exporter
    step_writer = STEPControl_Writer()
    Interface_Static_SetCVal("write.step.schema", "AP203")
    # transfer shapes and write file
    step_writer.Transfer(a_shape, STEPControl_AsIs)
    status = step_writer.Write(filename)
    assert status == IFSelect_RetDone
    assert os.path.isfile(filename)
By default, OCC writes units in millimeters, so I'm curious what function / method was used to export your STEP file.
Interface_Static_SetCVal("Interface_Static_SetCVal("write.step.unit","MM")
The docs though state that this methods Defines a unit in which the STEP file should be written. If set to unit other than MM, the model is converted to these units during the translation., so having to explicitly set this unit is unexpected.
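If the goal is instead to export in inches, here is a minimal sketch under the same assumptions as above (OCC.Core import paths from a recent pythonocc-core; a_shape being an existing TopoDS_Shape):

from OCC.Core.IFSelect import IFSelect_RetDone
from OCC.Core.Interface import Interface_Static_SetCVal
from OCC.Core.STEPControl import STEPControl_Writer, STEPControl_AsIs

step_writer = STEPControl_Writer()
Interface_Static_SetCVal("write.step.schema", "AP203")
Interface_Static_SetCVal("write.step.unit", "INCH")  # convert the model to inches on export
step_writer.Transfer(a_shape, STEPControl_AsIs)
status = step_writer.Write("out_inches.step")  # hypothetical filename
assert status == IFSelect_RetDone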
MWE
To generate PlantUML diagrams in the (sub)folder ./Diagrams/ I use the following Python script:
from plantuml import PlantUML
import os
from os.path import abspath

# set max size to 4 times the default of 4096 (i.e. 16,384)
os.environ['PLANTUML_LIMIT_SIZE'] = str(4096 * 4)

server = PlantUML(url='http://www.plantuml.com/plantuml/img/',
                  basic_auth={},
                  form_auth={}, http_opts={}, request_opts={})

diagram_dir = "./Diagrams"
for file in os.listdir(diagram_dir):
    filename = os.fsdecode(file)
    if filename.endswith(".txt"):
        server.processes_file(abspath(f'./Diagrams/{filename}'))
It is used to process, for example, the following test.txt file:
@startuml
'Enforce straight lines
skinparam linetype ortho
' Set direction of graph hierarchy
left to right direction
' create work package data
rectangle "something something something" as ffd0
rectangle "something something something" as ffd1
rectangle "something something something something something" as ffd2
rectangle "something something something something" as ffd3
rectangle "something something somethingsomethingsomething" as ffd4
rectangle "something something something something something something" as ffd5
rectangle "something something something something" as ffd6
rectangle "something something something " as ffd7
' Implement graph hierarchy
ffd0-->ffd1
ffd1-->ffd2
ffd2-->ffd3
ffd3-->ffd4
ffd4-->ffd5
ffd5-->ffd6
ffd6-->ffd7
@enduml
Expected behavior
Because I set the PLANTUML_LIMIT_SIZE variable to 16384 pixels (4096 * 4), as the FAQ suggests, I would expect the picture of the diagram to fill up with all the blocks connected side by side, up to a max width of 16384 pixels.
To rule out that setting it from the Python script was done incorrectly, I also tried setting it manually with set PLANTUML_LIMIT_SIZE=16384, expecting the same behavior as explained in the above paragraph (a picture filled up to 16384 pixels).
Observed behavior
Instead, PlantUML cuts off the picture at 2000 horizontal pixels, as shown in the figure below:
Question
How can I ensure, from a Python script, that PlantUML does not cut off the blocks of the diagrams beyond n pixels (height or width)?
The best way I've found to prevent diagrams from being cut off, without trying to guess at the size or picking some arbitrarily large limit, is to select SVG output.
Note that setting PLANTUML_LIMIT_SIZE is only going to have an effect if you're running PlantUML locally, but it appears the Python interface you're using sends the diagram to the online service. I don't know the internals of that interface, but per the documentation you should be able to get SVG output by using http://www.plantuml.com/plantuml/svg/ as the service URL.
If you need the final image in PNG format, you will need to convert it with another tool.
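For instance, a minimal sketch using the same python-plantuml client as in the question, just pointed at the SVG endpoint (the output path is hypothetical):

from plantuml import PlantUML

# Same client as in the question, but using the SVG endpoint so the
# server returns a scalable image instead of a clipped PNG.
server = PlantUML(url='http://www.plantuml.com/plantuml/svg/')
server.processes_file('Diagrams/test.txt', outfile='Diagrams/test.svg')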
Approach 1:
To prevent the diagram from being cut off, I took the following steps:
Downloaded the plantuml.jar from this location http://sourceforge.net/projects/plantuml/files/plantuml.jar/download
Put the diagram which I wrote in a someLargeDiagram.txt file, in the same directory as the plantuml.jar file.
Opened terminal on Ubuntu 20.04 in that same directory and ran:
java -jar plantuml.jar -verbose someLargeDiagram.txt
That successfully generated the diagram as .png file, which was not cut off.
Approach 2:
After creating even larger graphs, they got cut off again, and PlantUML printed a message to increase PLANTUML_LIMIT_SIZE. I tried passing the size as a command-line argument using java -jar plantuml.jar -verbose -PLANTUML_LIMIT_SIZE=8192 Diagrams/latest.uml, but that did not work, nor did -PLANTUML_LIMIT_SIZE 8192. (Java system properties are normally passed as -DPLANTUML_LIMIT_SIZE=8192 before -jar.) This link suggested one could set it as an environment variable, so I did that on Ubuntu 20.04 with export PLANTUML_LIMIT_SIZE=8192, after which I successfully created a larger diagram that was not cut off with the command:
java -jar plantuml.jar -verbose Diagrams/latest.uml
I don't know if there is anything that can be done to speed up my code at all, probably not by much if at all, but I thought I would ask here.
I am working on a Python script for a program that uses a custom embedded Python interpreter, so I can only use the standard library. External libraries like Pillow and NumPy don't work because the program changed the name of the Python DLL, so precompiled extension modules can't link against it.
This program doesn't support pasting transparent images from the clipboard outside of its own proprietary format. So I'm writing a script to cover that feature. It grabs the CF_DIBv5 format from the clipboard using ctypes and checks to see if it is 32bpp and that an alphamask exists.
Here's the slow part. I then need to isolate the alpha channel and save it as its own separate image. I can do this easily enough: grab a long from the byte string, AND it with the alpha mask to get the alpha channel, and pack it back into my new bitmap byte string. On a small 300x300 image, this takes close to 10 seconds, which isn't horrible; I will gladly live with that. However, I fear it's going to be horribly slow on larger megapixel images.
I'm not showing the complete code here because it's a horrible ugly mess and most of it is just defining the structures I'm using for my bitmap class and getting ctypes working. But here are the important parts where I loop over the data.
rowsizemask = calcRowSize(24, bmp.header.bV5Width)  # returns bytes per row needed
rowmaskpadding = b'\x00' * (rowsizemask - bmp.header.bV5Width * 3)  # creates padding bytes

# loop over image data
for y in range(bmp.header.bV5Height):
    for x in range(bmp.header.bV5Width):
        offset, color = unpack(offset, ">L", buff)  # calls struct.unpack in custom function
        color = color[0] & bmp.header.bV5AlphaMask  # gets alpha channel
        newbmp.pixels += struct.pack(">3B", color, color, color)  # creates 24bpp listing
    newbmp.pixels += rowmaskpadding  # pad row to meet BMP specs
So what do you think? Am I missing something obvious? Or is this about as good as it's going to get with pure python only?
Okay, so after some more digging, I realized I could use ctypes.create_string_buffer to create a binary string of the perfect size and then use slices to change the values.
There are more tiny optimizations and code cleanups I can do but this has taken it from a script that can easily take several minutes to complete on a 900x900 pixel image, to just a few seconds.
Is this the best option? No idea, but it works. And it's faster than I had thought possible. See the edited code here. The changes are minor.
rowSizeMask = calcRowSize(24, bmp.header.bV5Width)  # returns bytes per row needed
paddingLength = rowSizeMask - bmp.header.bV5Width * 3
rowMaskPadding = b'\x00' * paddingLength  # creates padding bytes
writeOffset = 0

# create pixel buffer
# rowSizeMask includes padding, multiply by height for total byte count
newBmp.pixels = ctypes.create_string_buffer(bmp.header.bV5Height * rowSizeMask)

# loop over image data
for y in range(bmp.header.bV5Height):
    for x in range(bmp.header.bV5Width):
        offset, color = unpack(offset, ">L", buff)  # calls struct.unpack in custom function
        color = color[0] & bmp.header.bV5AlphaMask  # gets alpha channel
        newBmp.pixels[writeOffset:writeOffset + 3] = struct.pack(">3B", color, color, color)  # creates 24bpp listing
        writeOffset += 3
    # pad row to meet BMP specs; the buffer is pre-sized, so write the
    # padding through the same slice assignment rather than +=
    newBmp.pixels[writeOffset:writeOffset + paddingLength] = rowMaskPadding
    writeOffset += paddingLength
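If pure-Python speed is ever still a concern: when the alpha mask puts alpha in a fixed byte of each 32bpp pixel, the whole channel can be pulled out with a single bytes slice instead of a per-pixel unpack. A rough sketch under that assumption (extract_alpha is a hypothetical helper, not part of the original script, and it assumes alpha is the fourth byte of every pixel):

def extract_alpha(pixel_bytes, width, height):
    """Build 24bpp grayscale pixel data from the alpha channel.

    Assumes 32bpp input where alpha is the fourth byte of each pixel;
    adjust the slice offset for a different alpha mask.
    """
    alpha = pixel_bytes[3::4]          # every fourth byte: the alpha channel
    row_size = (width * 3 + 3) & ~3    # 24bpp rows are padded to 4-byte multiples
    padding = b'\x00' * (row_size - width * 3)
    rows = []
    for y in range(height):
        row = alpha[y * width:(y + 1) * width]
        # replicate each alpha byte into R, G and B
        rows.append(bytes(b for a in row for b in (a, a, a)) + padding)
    return b''.join(rows)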
I'm trying to work out how to create a batch operation tool in ArcCatalog, based on all .img raster files in a directory. I do not need to change the code, but I need to set the correct parameters.
Here's my code:
'''This script uses map algebra to find values in an
elevation raster greater than a specified value.'''
import os
import arcpy
# switch on Spatial Analyst
arcpy.CheckOutExtension('Spatial')
# load the spatial analyst module
from arcpy.sa import *
# overwrite any previous files of the same name
arcpy.env.overwriteOutput = True
# Specify the input folder and cut-offs
inDirectory = arcpy.GetParameterAsText(0)
cutoffElevation = int(arcpy.GetParameterAsText(1))
for i in os.listdir(inDirectory):
    if os.path.splitext(i)[1] == '.img':
        inRaster = os.path.join(inDirectory, i)
        outRaster = os.path.join(inDirectory, os.path.splitext(i)[0] + '_above_' + str(cutoffElevation) + '.img')
        # Make a map algebra expression and save the resulting raster
        tmpRaster = Raster(inRaster) > cutoffElevation
        tmpRaster.save(outRaster)
# Switch off Spatial Analyst
arcpy.CheckInExtension('Spatial')
In the parameters I have selected:
Input Raster: Raster Dataset - direction Input, Multivalue Yes
Output Raster: Raster Dataset - direction Output
Cut off elevation: String - direction Input
I add the images I want in the input raster, select the output raster and cut off elevation. But I get the error:
line 13, in <module>
    cutoffElevation = int(arcpy.GetParameterAsText(1))
ValueError: invalid literal for int() with base 10
Does anybody know how to fix this?
You have three input parameters shown in that dialog box screenshot, but only two are described in the script. (The output raster outRaster is being defined in line 15, not as an input parameter.)
The error you're getting is because the output raster (presumably a file path and file name) can't be converted to an integer.
There are two ways to solve that:
Change the input parameters within that tool definition, so you're only feeding in input raster (parameter 0) and cut off elevation (parameter 1).
Change the code so it's looking for the correct parameters that are currently defined -- input raster (parameter 0) and cut off elevation (parameter 2).
inDirectory = arcpy.GetParameterAsText(0)
cutoffElevation = int(arcpy.GetParameterAsText(2))
Either way, you're making sure that the GetParameterAsText command is actually referring to the parameter you really want.
I'm having issues trying to set a default cell size for polygon to raster conversion. I need to convert a buffered stream (polygon) to a raster layer, so that I can burn the stream into a DEM. I'd like to automate this process to include it in a larger script.
My main problem is that the PolygonToRaster_conversion() tool is not allowing me to set the cell size to a raster layer value. It's also not obeying the default raster cell size I'm trying to set in the environment. Instead, it consistently uses the default "extent divided by 250".
Here is my script for this process:
# Input Data
Input_DEM = "C:\\GIS\\DEM\\dem_30m.grid"
BufferedStream = "C:\\GIS\\StreamBuff.shp"
# Environment Settings
arcpy.env.cellSize = Input_DEM
# Convert to Raster
StreamRaster = "C:\\GIS\\Stream_Rast.grid"
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster, "CELL_CENTER", "NONE", Input_DEM)
This produces the following error:
"Cell size must be greater than zero."
The same error occurs if I type out the path for the DEM layer.
I've also tried manually typing in a number for the cell size. This works, but I want to generalize the usability of this tool.
What I really don't understand is that I used the DEM layer as the cell size manually through the ArcGIS interface and this worked perfectly!!
Any help will be greatly appreciated!!!
There are several options here. First, you can use the raster band properties to extract the cell size and insert that into the PolygonToRaster function. Second, try using the MINOF parameter in the cell size environment setting.
import arcpy
# Input Data
Input_DEM = "C:\\GIS\\DEM\\dem_30m.grid"
BufferedStream = "C:\\GIS\\StreamBuff.shp"
# Use the describe function to get at cell size
desc = arcpy.Describe(Input_DEM)
cellsize = desc.meanCellWidth
# Convert to Raster
StreamRaster = "C:\\GIS\\Stream_Rast.grid"
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster, "CELL_CENTER", "NONE", cellsize)
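And a sketch of the second suggestion, assuming the conversion honors the cell size environment setting (the explicit cell size argument is simply omitted here):

import arcpy

# Let the geoprocessing environment pick the smallest cell size
# among the inputs instead of passing an explicit value.
arcpy.env.cellSize = "MINOF"
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster, "CELL_CENTER", "NONE")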
I am interested in creating a dmg disk image on MacOS X from Python, and came across the following solution: How do I create a nice-looking DMG for Mac OS X using command-line tools?
However, I am running into a strange issue related to path lengths. The following script illustrates my problem:
import os
import Image  # PIL's old-style import

for NAME in ['works', 'doesnotwork']:
    if os.path.exists(NAME + '.dmg'):
        os.remove(NAME + '.dmg')
    if os.path.exists('/Volumes/' + NAME):
        raise Exception("Need to eject /Volumes/%s first" % NAME)

    # generate a gradient background image
    nx = 256
    ny = 256
    image = Image.new("RGB", (nx, ny))
    for i in range(nx):
        for j in range(ny):
            image.putpixel((i, j), (i, 0, j))

    os.system('hdiutil create -volname %s -fs HFS+ -size 10m %s.dmg' % (NAME, NAME))
    os.system('hdiutil attach -readwrite -noverify -noautoopen %s.dmg' % NAME)

    os.mkdir('/Volumes/%s/.background' % NAME)
    image.save('/Volumes/%s/.background/background.png' % NAME, 'PNG')

    apple_script = """osascript<<END
tell application "Finder"
    tell disk "%s"
        open
        set current view of container window to icon view
        set toolbar visible of container window to false
        set statusbar visible of container window to false
        set the bounds of container window to {100, 100, 355, 355}
        set theViewOptions to the icon view options of container window
        set the background picture of theViewOptions to file ".background:background.png"
        close
        open
    end tell
end tell
END""" % NAME

    os.system(apple_script)
If run, the background gets correctly set in the disk image called 'works', but not in the one called 'doesnotwork'. It seems I am limited to 5 characters for the volume name. However, if I shorten the name of the folder used to store the background, e.g. to .bkg instead of .background, then I can use a longer volume name, which suggests the issue is related to the length of the overall path. Does anyone know at what level there is a limit on the path length? Is there a workaround to allow arbitrarily long paths?
EDIT: I am using MacOS 10.6 - the script seems to work correctly on 10.7
I believe you have to escape your quotes; you can't have quotes inside of quotes without escaping them. For example, it should be tell application \"Finder\". There are many places where your quotes are not used properly. If you look at the example script you linked to, that person used single and double quotes cleverly to avoid this issue. My suggestion is that you fix that.
Plus, you can't refer to a file in AppleScript like this: ".background:background.png". AppleScript doesn't know what that means when a path begins with a period. A path in AppleScript begins with the name of the hard drive, like Macintosh HD. You need to put the whole path there in proper AppleScript format, and that needs to be quoted too, with escaped quotes.
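A rough sketch of that second point (hypothetical; it just rebuilds the path string from the volume name used in the question):

# Build an HFS-style AppleScript path that starts at the volume name
# (colon-separated) instead of at ".", e.g. "works:.background:background.png"
background_path = '%s:.background:background.png' % NAME
set_line = 'set the background picture of theViewOptions to file "%s"' % background_path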
Good luck.