I am working on a really big script right now where I have a CSV file that I have removed rows and columns from and edited the headers. I need to create one big shapefile for the entire CSV file, then create individual shapefiles for the units under one of the headers. I thought the best way to do this would be to use arcpy.MakeXYEventLayer(). I saw in an ArcGIS sample script to then use arcpy.GetCount() on the output of the XY event layer, then arcpy.SaveToLayerFile_management() and arcpy.FeatureClassToShapefile_conversion, but when I run the script only my CSV file is getting edited and there is no layer in the output folder. Is there a step I am missing, or should this be making my shapefile?
These are the few lines of code I have used, after all of the CSV file editing, to do what is described above:
outLyr = sys.argv[3]  # shapefile layer output name
XYLyr.newLyr(csvOut, lyrOutFile, spRef, sys.argv[4], sys.argv[5]) # x coordinate column; y coordinate column
print arcpy.GetCount_management(lyrOutFile)
csv2LYR.saveLYR(lyrOutFile, curDir)
arcpy.SaveToLayerFile_management does not save data to a shapefile or any other kind of feature class. It only creates a .lyr file, which points to a data source and renders it with saved symbology, etc. You can use arcpy.FeatureClassToShapefile_conversion to create the shapefile from the in-memory feature layer created with arcpy.MakeXYEventLayer. See the help for that tool for details.
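For illustration, a minimal sketch of that workflow; the CSV path, coordinate field names, spatial reference, and output folder below are placeholders, not values from your script:

import arcpy

csvOut = r"C:\temp\edited.csv"            # placeholder path to the edited CSV
spRef = arcpy.SpatialReference(4326)      # placeholder spatial reference (WGS 84)

# build the in-memory XY event layer from the CSV
arcpy.MakeXYEventLayer_management(csvOut, "X_FIELD", "Y_FIELD", "xy_lyr", spRef)
print(arcpy.GetCount_management("xy_lyr"))

# write the event layer out as a real shapefile on disk
arcpy.FeatureClassToShapefile_conversion("xy_lyr", r"C:\temp\out_folder")

From there you could select subsets of the layer (for example with arcpy.SelectLayerByAttribute_management) and export one shapefile per unit.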
I have a similar problem to the one described in this other question:
Reading from a CSV file while it is being written to
Unfortunately the solution is not explained.
I'd like to create a script that plots some variables in a .csv file dynamically. The .csv is updated every time a sensor registers something.
My basic idea was to re-read the file at a fixed interval and, if the number of rows has increased, update the plot with the new values.
How can I proceed?
I am not that experienced with csv, but take this logic:
import csv

def writeandRead(one_row, path="file.csv"):
    # append mode; change "a" to "w" if you want to overwrite instead
    with open(path, "a") as f:
        csv.writer(f).writerow(one_row)
    with open(path, "r") as f:
        return list(csv.reader(f))

for row in rows:  # rows can be a list of lists, a dict's items, etc.
    print(writeandRead(row))
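That covers writing and re-reading. For the plotting part of the question, here is a minimal polling sketch of the idea described above; the file name, single-column layout, and one-second period are assumptions:

import csv
import time
import matplotlib.pyplot as plt

plt.ion()                      # interactive mode so the figure can be redrawn
fig, ax = plt.subplots()
last_count = 0

while True:
    with open("data.csv", "r") as f:
        rows = [r for r in csv.reader(f) if r]
    if len(rows) > last_count:                 # new rows were appended by the sensor
        last_count = len(rows)
        values = [float(r[0]) for r in rows]   # assumes one numeric value per row
        ax.clear()
        ax.plot(values)
        fig.canvas.draw()
        fig.canvas.flush_events()
    time.sleep(1.0)                            # fixed polling period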
I have a dataset where each row is a 'column'-dimensional datapoint, and I want to process it and save each datapoint to a .pnb file. So I need a snippet that creates a new file and saves the datapoint in it.
For now I have this, but I am having trouble with the file path. Should it be relative to where my notebook is, or start from C:?
import numpy as np

y = full_dataset[1, :]
np.save('./data/folder1/2.pnb', y)
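For illustration only, a minimal sketch of saving every row to its own file; the folder layout and loop are assumptions. A relative path like './data/folder1/2.pnb' is resolved against the current working directory, which for a notebook is normally the folder the notebook was started from:

import os
import numpy as np

out_dir = os.path.join("data", "folder1")
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)                  # create the target folder first

for i in range(full_dataset.shape[0]):    # assumes full_dataset is a 2-D array
    y = full_dataset[i, :]
    # np.save appends ".npy" when the name has no .npy extension,
    # so this actually writes e.g. "2.pnb.npy"
    np.save(os.path.join(out_dir, "%d.pnb" % i), y)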
I have a recording of tracking data in .edf format (SR Research EyeLink). I want to convert it to ASC/CSV format in Python. I have the GUI application, but I want to do it programmatically.
I found the package pyEDFlib but couldn't find an example of how to convert the eye-tracking .edf file to .asc or .csv.
What would be the best way to do it?
Thanks
If I trust the page here (http://pyedflib.readthedocs.io/en/latest), you can run through all the signals in the file this way:
import pyedflib
import numpy as np
f = pyedflib.EdfReader("data/test_generator.edf")
n = f.signals_in_file
signal_labels = f.getSignalLabels()
sigbufs = np.zeros((n, f.getNSamples()[0]))
for i in np.arange(n):
    sigbufs[i, :] = f.readSignal(i)
The pyEDFlib library simply reads the file into an EdfReader object.
Then you just need to go through and make a row for each.
I assume that signal_labels (in the code above) will be an array with all the labels, so make a comma-separated string out of them:
signal_labels_row = ",".join(signal_labels)
Then do the same for each signal: one comma-separated string for each.
Then simply write them to a file.
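A minimal sketch of that writing step, reusing the signal_labels, sigbufs and n variables from the snippet above; the output file name is an assumption, and it follows the row-per-signal layout described here:

with open("test_generator.csv", "w") as out:
    out.write(",".join(signal_labels) + "\n")                      # label row
    for i in range(n):                                             # one row per signal
        out.write(",".join(str(v) for v in sigbufs[i, :]) + "\n")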
I can see they provide an example of how to read a file and extract all the data you need here
https://github.com/holgern/pyedflib/blob/master/demo/readEDFFile.py
Based on your answers I have created this Python 3 script to export all signals to multiple .csv files: https://github.com/folkien/pyEdfToCsv
I have a few lists which I want to save to a *.mat file. According to the scipy.io.savemat documentation, I need to create a dictionary with the lists and then use the command to save it to a *.mat file.
If I save it the way mentioned in the docs, the .mat file will contain a struct whose fields are the arrays I used in the dictionary. Now I have a problem here: another program (which is not editable) will load the .mat files and plot some graphs from the data. That program cannot process the struct, because it is written so that when it loads a .mat file it directly processes the arrays in it.
So is there a way to save the .mat file without using dictionaries? Please see the image for more understanding.
Thanks
This is the sample code I used to save my *.mat file:
import os
os.getcwd()
os.chdir(os.getcwd())
import scipy.io as sio
x=[1,2,3,4,5]
y=[234,5445,778] #can be 1000 lists
data={}
data['x']=x
data['y']=y
sio.savemat('test.mat',{'interpolated_data':data})
How about
import numpy as np
import scipy.io

scipy.io.savemat('interpolated_data_max_compare.mat',
                 {'NA1_X_order10_ACCE_ms2': np.zeros((3000, 1)),
                  'NA1_X_order10_DISP_mm': np.ones((3000, 1))})
Should work fine...
According to the code you added in your question, instead of sio.savemat('...', {'interpolated_data':data}), just save
sio.savemat('...', data)
and you should be fine: data is already a dictionary, so you don't need to add an extra level with {'interpolated_data': data} when saving.
You could use the writing primitives directly:

import scipy.io.matlab as ml

f = open("something.mat", "wb")
mw = ml.mio5.MatFile5Writer(f)
mw.put_variables({"testVar": 22})
f.close()
I have a stack of CT-scan images. After processing one image from that stack using Matlab, I saved the XY coordinates for each distinct boundary region to a different Excel sheet as follows:
I = imread('myCTscan.jpeg');
BW = im2bw(I);
[coords, labeledImg] = bwboundaries(BW, 4, 'holes');
sheet = 1;
for n = 1:length(coords)
    xlswrite('fig.xlsx', coords{n,1}, sheet, 'A1');
    sheet = sheet + 1;
end
The next step is then to import this set of coordinates and plot it into Abaqus CAE Sketch for finite element analysis.
I figured out that my workflow is something like this:
1. Import the Excel workbook
2. For each sheet in the workbook:
   2.1. For each row: read both columns to get the xy coordinates (each row has two columns, the x and y coordinate)
   2.2. Put each xy coordinate pair inside a list
   2.3. From the list, sketch using the spline method
3. Repeat step 2 for the other sheets within the workbook
I searched for a while and found something like this:
from abaqus import *
lines= open('fig.xlsx', 'r').readlines()
pointList= []
for line in lines:
    pointList.append(eval('(%s)' % line.strip()))
s1= mdb.models['Model-1'].ConstrainedSketch(name='mySketch', sheetSize=500.0)
s1.Spline(points= pointList)
But this only reads XY coordinates from one sheet, and I'm stuck at step 3 above. Thus my problem is: how do I read these coordinates from different sheets using an Abaqus/Python (Abaqus 6.14, Python 2.7) script?
I'm new to Python programming; I can read and understand the syntax but can't write very well (I'm still struggling with how to import a Python module in Abaqus). Manually typing each coordinate (like in Abaqus' modelAExample.py tutorial) is practically impossible, since each of my CT-scan images can have 100+ boundary regions and 10k+ points.
I'm using:
Windows 7 x64
Abaqus 6.14 (with built in Python 2.7)
Excel 2013
Matlab 2016a with Image Processing Toolbox
You are attempting to read an Excel file as if it were a comma-separated file. A CSV file by definition cannot have more than one sheet (tab). Your read command is interpreting the file as a CSV and not allowing you to iterate over the sheets in your file (though it begs the question of how your file is opening properly in the first place, as you are saving an .xlsx and reading a CSV).
There are numerous Python libraries that will parse and process XLS/XLSX files.
Take a look at openpyxl and use it to read your file in.
You would likely use something like:
from openpyxl import load_workbook

wb = load_workbook('fig.xlsx')       # open the workbook
for name in wb.sheetnames:
    ws = wb[name]
and then input your remaining commands.
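Putting that together with the Abaqus commands from your question, a minimal sketch; the file name, the x-in-column-A / y-in-column-B layout, and the sketch names are assumptions:

from abaqus import *
from openpyxl import load_workbook

wb = load_workbook('fig.xlsx')
for name in wb.sheetnames:
    ws = wb[name]
    pointList = []
    for row in ws.iter_rows():
        x, y = row[0].value, row[1].value
        if x is None:                 # stop at the first empty row
            break
        pointList.append((float(x), float(y)))
    s = mdb.models['Model-1'].ConstrainedSketch(name='sketch_' + name,
                                                sheetSize=500.0)
    s.Spline(points=pointList)

Note that openpyxl has to be installed into (or made importable from) Abaqus' own Python 2.7 before the import will work.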