Is there a way to convert SCAD files to STL format efficiently in Python? I have around 3000 files to convert to STL, plus some files in other formats.
I tried searching the internet for a suitable library but was not able to find one (I am using Windows). Does anyone have any idea?
You can run OpenSCAD from the command line (see the documentation) and prepare every command in Python (example in Python 3):
from os import listdir
from subprocess import call

files = listdir('.')
for f in files:
    if f.endswith(".scad"):              # get all .scad files in the directory
        of = f.replace('.scad', '.stl')  # name of the output .stl file
        call(["openscad", "-o", of, f])  # run the openscad command
In Python 3.5 and higher, subprocess.call can be replaced by subprocess.run().
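For example, a minimal sketch of the same loop using subprocess.run (assuming openscad is available on the PATH):

from os import listdir
from subprocess import run

for f in listdir('.'):
    if f.endswith(".scad"):
        # check=True raises CalledProcessError if openscad exits with an error
        run(["openscad", "-o", f.replace(".scad", ".stl"), f], check=True)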
Related
I have been using gdal from the command line to convert an asc file to a GeoJSON output. I can do this successfully:
gdal_polygonize.py input.asc -f "GeoJSON" output.json
Now I wish to use Python and follow this process for a range of files.
import gdal
import glob
for file in glob.glob("dir/*.asc"):
new_name = file[:-4] + ".json"
gdal.Polygonize(file, "-f", "GeoJSON", new_name)
However, for exactly the same file I get the following error: TypeError: in method 'Polygonize', argument 1 of type 'GDALRasterBandShadow *'
Why does the command line version work and the python version not?
The easiest way to find what is wrong with your call to gdal.Polygonize is to investigate the documentation for that function. You can find it here by going through the C algorithms API. Admittedly, GDAL's documentation isn't the most coherent and easy to access. This is doubly true for the conversion of the C API to Python.
GDALPolygonize
GDALPolygonize (GDALRasterBandH hSrcBand,
GDALRasterBandH hMaskBand,
OGRLayerH hOutLayer,
int iPixValField,
char ** papszOptions,
GDALProgressFunc pfnProgress,
void * pProgressArg
)
You can see that the first two arguments are RasterBand types. The output type is an OGRLayer, and there are other (in this case unnecessary) options.
To use gdal.Polygonize() you will need to open your input file with gdal, get a raster band, and pass that into the function. Similarly, you will need to create a new geojson vector file, and pass its layer into the function.
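As an illustration only, here is a minimal sketch of that approach; the file names, the layer name, and the "DN" field are placeholder assumptions, not something from your setup:

from osgeo import gdal, ogr

# Open the input raster and grab the band to polygonize
src_ds = gdal.Open("input.asc")
src_band = src_ds.GetRasterBand(1)

# Create a GeoJSON datasource and a layer to receive the polygons
drv = ogr.GetDriverByName("GeoJSON")
dst_ds = drv.CreateDataSource("output.json")
dst_layer = dst_ds.CreateLayer("output", srs=None)

# Add a field to hold the pixel value, then polygonize band -> layer
dst_layer.CreateField(ogr.FieldDefn("DN", ogr.OFTInteger))
gdal.Polygonize(src_band, None, dst_layer, 0, [])

# Close the datasets so everything is flushed to disk
dst_ds = None
src_ds = None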
Using subprocess
As an alternative, you could employ python's subprocess module to call the same command-line program that you already know.
import subprocess
import glob
import os
for f in glob.glob("dir/*.asc"):                 # avoid shadowing the name 'file'
    out_file = os.path.splitext(f)[0] + ".json"  # same path with a .json extension
    # glob already returns the path including the "dir/" prefix,
    # so f can be passed straight to gdal_polygonize.py
    cmdline = ["gdal_polygonize.py", f, "-f", "GeoJSON", out_file]
    subprocess.call(cmdline)
I am trying, in a Python script, to import a tar.gz file from HDFS and then untar it. The file comes as 20160822073413-EoRcGvXMDIB5SVenEyD4pOEADPVPhPsg.tar.gz and always has the same structure.
In my Python script, I would like to copy it locally and then extract the file. I am using the following commands to do this:
import subprocess
import os
import datetime
import time
today = time.strftime("%Y%m%d")
#Copy tar file from HDFS to local server
args = ["hadoop","fs","-copyToLocal", "/locationfile/" + today + "*"]
p=subprocess.Popen(args)
p.wait()
#Untar the CSV file
args = ["tar","-xzvf",today + "*"]
p=subprocess.Popen(args)
p.wait()
The copy from HDFS works perfectly, but I am not able to extract the file; I am getting the following error:
['tar', '-xzvf', '20160822*.tar']
tar (child): 20160822*.tar: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
put: `reportResults.csv': No such file or directory
Can anyone help me?
Thanks a lot!
Try with the shell option:
p=subprocess.Popen(args, shell=True)
From the docs:
If shell is True, the specified command will be executed through the
shell. This can be useful if you are using Python primarily for the
enhanced control flow it offers over most system shells and still want
convenient access to other shell features such as shell pipes,
filename wildcards, environment variable expansion, and expansion of ~
to a user’s home directory.
And notice:
However, note that Python itself offers implementations of many
shell-like features (in particular, glob, fnmatch, os.walk(),
os.path.expandvars(), os.path.expanduser(), and shutil).
In addition to @martriay's answer, you also have a typo: you wrote "20160822*.tar", while your file pattern is "20160822*.tar.gz".
When applying shell=True, the command should be passed as a whole string (see documentation), like so:
p=subprocess.Popen('tar -xzvf 20160822*.tar.gz', shell=True)
If you don't need p, you can simply use subprocess.call:
subprocess.call('tar -xzvf 20160822*.tar.gz', shell=True)
But I suggest you use more standard libraries, like so:
import glob
import tarfile
today = "20160822" # compute your common prefix here
target_dir = "/tmp" # choose where ever you want to extract the content
for targz_file in glob.glob('%s*.tar.gz' % today):
    with tarfile.open(targz_file, 'r:gz') as opened_targz_file:
        opened_targz_file.extractall(target_dir)
I found a way to do what I needed: instead of calling the OS command, I used Python's tarfile module and it works!
import os
import glob
import tarfile

os.chdir("/folder_to_scan/")
for file in glob.glob("*.tar.gz"):
    print(file)
    tar = tarfile.open(file)
    tar.extractall()
    tar.close()
Hope this helps.
Regards
Majid
I have many csv files with this format:
Latitude,Longitude,Concentration
53.833399,-122.825257,0.021957
53.837893,-122.825238,0.022642
....
My goal is to produce GeoTiff files based on the information within these files (one tiff file per csv file), preferably using Python. This was done several years ago on the project I am working on; however, how they did it has been lost. All I know is that they most likely used GDAL.
I have attempted to do this by researching how to use GDAL, but that has not got me anywhere, as there are limited resources and I have no knowledge of how to use it.
Can someone help me with this?
Here is a little code I adapted for your case. You need to have the GDAL directory (the one containing all the *.exe utilities) added to your PATH for it to work (in most cases it's C:\Program Files (x86)\GDAL).
It uses the gdal_grid.exe utility (see the doc here: http://www.gdal.org/gdal_grid.html).
You can modify the gdal_cmd variable as you wish to suit your needs.
import subprocess
import os

# your directory with all your csv files in it
dir_with_csvs = r"C:\my_csv_files"

# make it the active directory
os.chdir(dir_with_csvs)

# function to get the csv filenames in the directory
def find_csv_filenames(path_to_dir, suffix=".csv"):
    filenames = os.listdir(path_to_dir)
    return [filename for filename in filenames if filename.endswith(suffix)]

# get the filenames
csvfiles = find_csv_filenames(dir_with_csvs)

# loop through each CSV file
# for each CSV file, make an associated VRT file to be used with the gdal_grid command
# and then run the gdal_grid util in a subprocess instance
for fn in csvfiles:
    vrt_fn = fn.replace(".csv", ".vrt")
    lyr_name = fn.replace('.csv', '')
    out_tif = fn.replace('.csv', '.tiff')
    with open(vrt_fn, 'w') as fn_vrt:
        fn_vrt.write('<OGRVRTDataSource>\n')
        fn_vrt.write('\t<OGRVRTLayer name="%s">\n' % lyr_name)
        fn_vrt.write('\t\t<SrcDataSource>%s</SrcDataSource>\n' % fn)
        fn_vrt.write('\t\t<GeometryType>wkbPoint</GeometryType>\n')
        fn_vrt.write('\t\t<GeometryField encoding="PointFromColumns" x="Longitude" y="Latitude" z="Concentration"/>\n')
        fn_vrt.write('\t</OGRVRTLayer>\n')
        fn_vrt.write('</OGRVRTDataSource>\n')
    gdal_cmd = 'gdal_grid -a invdist:power=2.0:smoothing=1.0 -zfield "Concentration" -of GTiff -ot Float64 -l %s %s %s' % (lyr_name, vrt_fn, out_tif)
    subprocess.call(gdal_cmd, shell=True)
I am new at programming and I have written a script to extract text from a vcf file. I am using a Linux virtual machine running Ubuntu. I have run this script from the command line by changing my directory to the folder containing the vcf file and then entering python script.py.
My script knows which file to process because the beginning of my script is:
my_file = open("inputfile1.vcf", "r+")
outputfile = open("outputfile.txt", "w")
The script puts the information I need into a list and then writes it to the output file. However, I have many input files (all .vcf) and want to write each one to a different output file with a name similar to its input (such as input_processed.txt).
Do I need to run a shell script to iterate over the files in the folder? If so, how would I change the Python script to accommodate this, i.e. writing each list to its own output file?
I would integrate it within the Python script, which will allow you to easily run it on other platforms too and doesn't add much code anyway.
import glob
import os
# Find all files ending in 'vcf'
for vcf_filename in glob.glob('*.vcf'):
    vcf_file = open(vcf_filename, 'r+')
    # Similar name with a different extension
    output_filename = os.path.splitext(vcf_filename)[0] + '.txt'
    outputfile = open(output_filename, 'w')
    # Process the data
    ...
To output the resulting files in a separate directory I would:
import glob
import os
output_dir = 'processed'
os.makedirs(output_dir, exist_ok=True)
# Find all files ending in 'vcf'
for vcf_filename in glob.glob('*.vcf'):
    vcf_file = open(vcf_filename, 'r+')
    # Similar name with a different extension
    output_filename = os.path.splitext(vcf_filename)[0] + '.txt'
    outputfile = open(os.path.join(output_dir, output_filename), 'w')
    # Process the data
    ...
You don't need to write a shell script; maybe this question will help you:
How to list all files of a directory?
It depends on how you implement the iteration logic.
If you want to implement it in Python, just do it.
If you want to implement it in a shell script, change your Python script to accept parameters, and then use the shell script to call the Python script with the suitable parameters.
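For example, a rough sketch of the parameter-based version (the _processed suffix and the copy-through processing are assumptions, not your actual script):

import sys
import os

# Take the input file name from the command line, e.g.:
#   python script.py inputfile1.vcf
input_name = sys.argv[1]
output_name = os.path.splitext(input_name)[0] + "_processed.txt"

with open(input_name, "r") as infile, open(output_name, "w") as outfile:
    # replace this with your existing extraction logic
    for line in infile:
        outfile.write(line)

A shell loop such as for f in *.vcf; do python script.py "$f"; done could then drive it over the whole folder.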
I have a script I frequently use which includes using PyQt5 to pop up a window that prompts the user to select a file... then it walks the directory to find all of the files in the directory:
pathname = first_fname[:(first_fname.rfind('/') + 1)] #figures out the pathname by finding the last '/'
new_pathname = pathname + 'for release/' #makes a new pathname to be added to the names of new files so that they're put in another directory...but their names will be altered
file_list = [f for f in os.listdir(pathname) if f.lower().endswith('.xls') and not 'map' in f.lower() and not 'check' in f.lower()] #makes a list of the files in the directory that end in .xls and don't have key words in the names that would indicate they're not the kind of file I want
You need to import os to use the os.listdir command.
You can use listdir (you need to write a condition to filter for the particular extension) or glob. I generally prefer glob. For example:
import os
import glob

for file in glob.glob('*.py'):
    data = open(file, 'r+')
    output_name = os.path.splitext(file)[0]
    output = open(output_name + '.txt', 'w')
    output.write(data.read())
This code reads the content of each input file and stores it in the corresponding output file.
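For comparison, the listdir variant mentioned above would look roughly like this (a sketch; the extension check is the filter condition you have to write yourself):

import os

for file in os.listdir('.'):
    if file.endswith('.py'):  # filter for the particular extension
        with open(file, 'r') as data, open(os.path.splitext(file)[0] + '.txt', 'w') as output:
            output.write(data.read())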
This question already has answers here:
Why does passing variables to subprocess.Popen not work despite passing a list of arguments?
I'm trying to write a simple program in Python that takes all the music files from my Downloads folder and puts them in my Music folder. I'm using Windows, and I can move the files manually using the cmd prompt, but my script gives this error:
WindowsError: [Error 2] The system cannot find the file specified
Here's my code:
#! /usr/bin/python
import os
from subprocess import call
def main():
    os.chdir("C:\\Users\Alex\Downloads")  #change directory to downloads folder
    suffix = ".mp3"                       #variable holding the .mp3 tag
    fnames = os.listdir('.')              #looks at all files
    files = []                            #an empty list that will hold the names of our mp3 files
    for fname in fnames:
        if fname.endswith(suffix):
            pname = os.path.abspath(fname)
            #pname = fname
            #print pname
            files.append(pname)           #add the mp3 files to our list
    print files
    for i in files:
        #print i
        move(i)

def move(fileName):
    call("move /-y " + fileName + " C:\Music")
    return

if __name__ == '__main__': main()
I've looked at the subprocess library and countless other articles, but I still have no clue what I'm doing wrong.
The subprocess.call method takes a list of parameters, not a string with space separators, unless you tell it to use the shell, which is not recommended if the string can contain anything from user input.
The best way is to build the command as a list
e.g.
cmd = ["move", "/-y", fileName, "C:\Music"]
call(cmd)
This also makes it easier to pass parameters (e.g. paths or files) with spaces in them to the called program.
Both these ways are given in the subprocess documentation.
You can pass in a delimited string but then you have to let the shell process the arguments
call("move /-y "+ fileName +" C:\Music", shell=True)
Also, in this case there is a Python function to do the move: shutil.move.
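For instance, a minimal sketch of the move helper from the question rewritten with shutil (the destination path is taken from the question, the rest is an assumption):

import shutil

def move(fileName):
    # shutil.move does the work itself, no shell command involved
    shutil.move(fileName, r"C:\Music")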
I'm not answering your question directly, but for such tasks plumbum is great and would make your life so much easier. subprocess's API is not very intuitive.
There could be several issues:
fileName might contain a space, so the move command only sees part of the filename.
move is an internal (shell built-in) command; you might need shell=True to run it:
from subprocess import check_call
check_call(r"move /-y C:\Users\Alex\Downloads\*.mp3 C:\Music", shell=True)
To move .mp3 files from the Downloads folder to Music without subprocess:
from glob import glob
from shutil import move
for path in glob(r"C:\Users\Alex\Downloads\*.mp3"):
    move(path, r"C:\Music")