I defined a function that fixes the header of, and opens, each FITS file in a list, then call it twice to overlay the FITS images. But when I use the values returned by the function to plot, I get "ValueError: I/O operation on closed file". I don't know why this happens. Thank you so much.
The following is the code of the defined function:
import numpy as np
import matplotlib.pyplot as plt
import aplpy
from astropy.io import fits
from astropy.wcs import WCS
from astropy.wcs import FITSFixedWarning
from astropy.coordinates import SkyCoord
import astropy.units as u
import warnings
warnings.filterwarnings("ignore", category=FITSFixedWarning)
#### define the function to fix the fitsheader and get the head info.
def fix_fitshead(filelis):
    with fits.open(filelis.replace('\n', ''), mode="readonly") as hdu:
        da = hdu[0].data[:,:]
        he = hdu[0].header
    if he["NAXIS"] == 4:
        he["NAXIS"] = 2
        for k in ["NAXIS3", "NAXIS4",
                  "CTYPE3", "CRVAL3", "CDELT3", "CROTA3", "CRPIX3",
                  "CTYPE4", "CRVAL4", "CDELT4", "CROTA4", "CRPIX4"]:
            if k in list(he.keys()):
                he.remove(k)
        hdu = fits.PrimaryHDU(da, he)
        print('WCS=', WCS(he))
    elif he["NAXIS"] == 3:
        he["NAXIS"] = 1
        for k in ["NAXIS3", "CTYPE3", "CRVAL3", "CRPIX3", "CUNIT3", "LBOUND3"]:
            if k in list(he.keys()):
                he.remove(k)
        if "CDELT3" in he:
            k1 = "CDELT3"
        elif "CD3_3" in he:
            k1 = "CD3_3"
        he.remove(k1)
        hdu = fits.PrimaryHDU(da, he)
    else:
        hdu = hdu[0]
    maxvalue = np.nanmax(da)
    c = WCS(he).wcs_pix2world([[he["NAXIS1"]/2, he["NAXIS2"]/2]], 1)
    coord = SkyCoord(ra=c[0][0]*u.deg, dec=c[0][1]*u.deg)
    if "LINE" in he:
        spec = he["LINE"]
    elif 'MOLECULE' in he:
        spec = he['MOLECULE']
    elif "INSTRUME" in he:
        spec = he['INSTRUME']
    print(he["OBJECT"])
    # print(maxvalue)
    # PRODID = 'reduced-850um'
    return maxvalue, spec, da, he, hdu, coord
## The following code plots the overlay picture by calling the defined function.
file_gray = '/Users/hjma/Desktop/smoothdata/progress/13co_fits.list'
file_cont = '/Users/hjma/Desktop/smoothdata/progress/hcn10_fits.list'
with open(file_gray, 'r') as f_gray:
    gray_list = [row_gray.rstrip('\n') for row_gray in f_gray]
with open(file_cont, 'r') as f_cont:
    cont_list = [row_cont.rstrip('\n') for row_cont in f_cont]
for filegray in gray_list:
    # print('filegray=', filegray)
    # call the defined function
    maxvalue1, spec1, d1, he1, hdu_gray, coord1 = fix_fitshead(filegray)
    for filecont in cont_list:
        # print('filecont=', filecont)
        # call the defined function
        maxvalue2, spec2, d2, he2, hdu_cont, coord2 = fix_fitshead(filecont)
        print('d2=', d2)
        fig = plt.figure()
        fig.set_figwidth(4); fig.set_figheight(4)
        ff = aplpy.FITSFigure(hdu_gray, figure=fig)
        ff.show_colorscale(cmap="Blues")
        ff.show_contour(hdu_cont, colors="red")
        plt.plot([0], label="HCN 1-0", color="r")
        plt.legend()
        plt.tight_layout()
        plt.close()
        break
I got the error:
---> 26 ff = aplpy.FITSFigure(hdu_gray, figure=fig)
....
ValueError: I/O operation on closed file
I added some general comments above on problems with your code that you might want to resolve. I would give you a suggested rewrite of your code, except I'm not 100% sure what it's meant to accomplish. It appears that, among other things, you want it to take 2D slices of 3D or 4D arrays, but as I noted above it doesn't actually achieve that goal.
Anyway, the reason for your error is specific to the case where the data is already 2D. In your if/elif/else statement you have:
if he["NAXIS"] == 4:
...
hdu = fits.PrimaryHDU(da, he)
print('WCS=',WCS(he))
elif he["NAXIS"] == 3:
...
hdu = fits.PrimaryHDU(da, he)
else:
hdu = hdu[0]
In the first two cases you read the data from the file before closing the file (da = hdu[0].data[:,:]) and created a new HDU object from it. However, in the last case you didn't do this and just passed along the original HDU object from the closed file (hdu = hdu[0]). Since the file it came from is closed, the data in this HDU can't be read any longer, so you get "I/O operation on closed file" when you pass it to aplpy.FITSFigure and it tries to read the HDU data.
One way you could work around this is to change that last line to hdu = fits.PrimaryHDU(da, he), as in the other cases, to create a new HDU from the already loaded data.
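A minimal sketch of that branch with the workaround applied (the unchanged branches are elided):

if he["NAXIS"] == 4:
    ...  # as before
elif he["NAXIS"] == 3:
    ...  # as before
else:
    # build a new in-memory HDU from the already loaded data instead of
    # reusing the HDU tied to the now-closed file
    hdu = fits.PrimaryHDU(da, he)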
A better way, which judging from your comment I think you might have found, is to refactor your code so that instead of passing fix_fitshead a filename, you pass it an already open HDUList object and use it like:
with fits.open(filename) as hdul:
    maxvalue1, spec1, d1, he1, hdu_gray, coord1 = fix_fitshead(hdul)
    ...
and don't close the file until you're actually done using it. This is a more flexible approach in general because it also allows you to use your code on FITS files that weren't directly opened from files on disk (e.g. for writing tests).
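A minimal sketch of that refactor (only the changed parts are shown; the header fixing stays as in the original function):

def fix_fitshead(hdul):
    # hdul is an already open astropy.io.fits.HDUList, managed by the caller
    da = hdul[0].data
    he = hdul[0].header
    # ... fix the header and build `hdu` with fits.PrimaryHDU(da, he) as before ...
    return maxvalue, spec, da, he, hdu, coord

The key point is that the caller owns the file handle, so the plotting code can run before the with block exits and the file closes.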
I'm writing a script in python using Open Cascade Technology (using the pyOCCT package for Anaconda) to import STEP files, defeature them procedurally and re-export them. I want to preserve the product hierarchy, names and colours as much as possible. Currently the script can import STEP files, simplify all of the geometry while roughly preserving the hierarchy and re-export the step file. The problem is no matter how I approach the problem, I can't manage to make it preserve the colours of the STEP file in a few particular cases.
Here's the model I pass in to the script:
And here's the result of the simplification:
In this case, the simplification has worked correctly, but the colours of some of the bodies were not preserved. The common thread is that the bodies that lose their colours are children of products which only have other bodies as their children (i.e. they don't contain sub-products).
This seems to be related to the way that Open Cascade imports STEP files and translates the product structure into document labels.
Alright, now for some code:
from OCCT.STEPControl import STEPControl_Reader, STEPControl_Writer, STEPControl_AsIs
from OCCT.BRepAlgoAPI import BRepAlgoAPI_Defeaturing
from OCCT.TopAbs import TopAbs_FACE, TopAbs_SHAPE, TopAbs_COMPOUND
from OCCT.TopExp import TopExp_Explorer
from OCCT.ShapeFix import ShapeFix_Shape
from OCCT.GProp import GProp_GProps
from OCCT.BRepGProp import BRepGProp
from OCCT.TopoDS import TopoDS
from OCCT.TopTools import TopTools_ListOfShape
from OCCT.BRep import BRep_Tool
from OCCT.Quantity import Quantity_ColorRGBA
from OCCT.ShapeBuild import ShapeBuild_ReShape
from OCCT.STEPCAFControl import STEPCAFControl_Reader, STEPCAFControl_Writer
from OCCT.XCAFApp import XCAFApp_Application
from OCCT.XCAFDoc import XCAFDoc_DocumentTool, XCAFDoc_ColorGen, XCAFDoc_ColorSurf
from OCCT.XmlXCAFDrivers import XmlXCAFDrivers
from OCCT.TCollection import TCollection_ExtendedString
from OCCT.TDF import TDF_LabelSequence
from OCCT.TDataStd import TDataStd_Name
from OCCT.TDocStd import TDocStd_Document
from OCCT.TNaming import TNaming_NamedShape
from OCCT.Interface import Interface_Static
# DBG
def export_step(shape, path):
    writer = STEPControl_Writer()
    writer.Transfer(shape, STEPControl_AsIs)
    writer.Write(path)
# DBG
def print_shape_type(label, shapeTool):
    if shapeTool.IsFree_(label):
        print("Free")
    if shapeTool.IsShape_(label):
        print("Shape")
    if shapeTool.IsSimpleShape_(label):
        print("SimpleShape")
    if shapeTool.IsReference_(label):
        print("Reference")
    if shapeTool.IsAssembly_(label):
        print("Assembly")
    if shapeTool.IsComponent_(label):
        print("Component")
    if shapeTool.IsCompound_(label):
        print("Compound")
    if shapeTool.IsSubShape_(label):
        print("SubShape")
# Returns a ListOfShape containing the faces to be removed in the defeaturing
# NOTE: For conciseness I've simplified this algorithm, so it *MAY* not produce exactly
# the same output as shown in the screenshots, but it should still do SOME simplification
def select_faces(shape):
    exp = TopExp_Explorer(shape, TopAbs_FACE)
    selection = TopTools_ListOfShape()
    nfaces = 0
    while exp.More():
        rgb = None
        s = exp.Current()
        exp.Next()
        nfaces += 1
        face = TopoDS.Face_(s)
        gprops = GProp_GProps()
        BRepGProp.SurfaceProperties_(face, gprops)
        area = gprops.Mass()
        surf = BRep_Tool.Surface_(face)
        if area < 150:
            selection.Append(face)
            # log(f"\t\tRemoving face with area: {area}")
    return selection, nfaces
# Performs the defeaturing
def simplify(shape):
    defeaturer = BRepAlgoAPI_Defeaturing()
    defeaturer.SetShape(shape)
    sel = select_faces(shape)
    if sel[0].Extent() == 0:
        return shape
    defeaturer.AddFacesToRemove(sel[0])
    defeaturer.SetRunParallel(True)
    defeaturer.SetToFillHistory(False)
    defeaturer.Build()
    if not defeaturer.IsDone():
        return shape  # TODO: Handle errors
    return defeaturer.Shape()
# Given the label of an entity, finds its displayed colour. If the entity has no defined colour, the parents are searched for defined colours as well.
def find_color(label, colorTool):
    col = Quantity_ColorRGBA()
    status = False
    while not status and label is not None:
        try:
            status = colorTool.GetColor(label, XCAFDoc_ColorSurf, col)
        except:
            break
        label = label.Father()
    return (col.GetRGB().Red(), col.GetRGB().Green(), col.GetRGB().Blue(), col.Alpha(), status, col)
# Finds all child shapes and simplifies them recursively. Returns true if there were any subshapes.
# For now this assumes all shapes passed into this are translated as "SimpleShape".
# "Assembly" entities should be skipped as we don't need to touch them; "Compound" entities should work with this as well, though the behaviour is untested.
# Use the print_shape_type(shapeLabel, shapeTool) method to identify a shape.
def simplify_subshapes(shapeLabel, shapeTool, colorTool, set_colours=None):
    labels = TDF_LabelSequence()
    shapeTool.GetSubShapes_(shapeLabel, labels)
    # print_shape_type(shapeLabel, shapeTool)
    # print(f"{shapeTool.GetShape_(shapeLabel).ShapeType()}")
    cols = {}
    for i in range(1, labels.Length()+1):
        label = labels.Value(i)
        currShape = shapeTool.GetShape_(label)
        print(f"\t{currShape.ShapeType()}")
        if currShape.ShapeType() == TopAbs_COMPOUND:
            # This code path should never be taken as far as I understand
            simplify_subshapes(label, shapeTool, colorTool, set_colours)
        else:
            ''' See the comment at the bottom of the main loop for an explanation of the function of this block
            col = find_color(label, colorTool)
            #print(f"{name} RGBA: {col[0]:.5f} {col[1]:.5f} {col[2]:.5f} {col[3]:.5f} defined={col[4]}")
            cols[label.Tag()] = col
            if set_colours != None:
                colorTool.SetColor(label, set_colours[label.Tag()][5], XCAFDoc_ColorSurf)'''
            # Doing both of these things seems to result in colours being reset but the geometry doesn't get replaced
            nshape = simplify(currShape)
            shapeTool.SetShape(label, nshape)  # This doesn't work
    return labels.Length() > 0, cols
# Set up XCaf Document
app = XCAFApp_Application.GetApplication_()
fmt = TCollection_ExtendedString('MDTV-XCAF')
doc = TDocStd_Document(fmt)
app.InitDocument(doc)
shapeTool = XCAFDoc_DocumentTool.ShapeTool_(doc.Main())
colorTool = XCAFDoc_DocumentTool.ColorTool_(doc.Main())
# Import the step file
reader = STEPCAFControl_Reader()
reader.SetNameMode(True)
reader.SetColorMode(True)
Interface_Static.SetIVal_("read.stepcaf.subshapes.name", 1) # Tells the importer to import subshape names
reader.ReadFile("testcolours.step")
reader.Transfer(doc)
labels = TDF_LabelSequence()
shapeTool.GetShapes(labels)
# Simplify each shape that was imported
for i in range(1, labels.Length()+1):
    label = labels.Value(i)
    shape = shapeTool.GetShape_(label)
    # Assemblies are just made of other shapes, so we'll skip this and simplify them individually...
    if shapeTool.IsAssembly_(label):
        continue
    # This function call here is meant to be the fix for the bug described.
    # The idea was to check if the TopoDS_Shape we're looking at is a COMPOUND and if so we would simplify and call SetShape()
    # on each of the sub-shapes instead in an attempt to preserve the colours stored in the sub-shape's labels.
    # status, loadedCols = simplify_subshapes(label, shapeTool, colorTool)
    # if status:
    #     continue
    shape = simplify(shape)
    shapeTool.SetShape(label, shape)
    # The code gets a bit messy here because this was another attempt at fixing the problem by building a dictionary of colours
    # before the shapes were simplified and then resetting the colours of each subshape after simplification.
    # This didn't work either.
    # But the idea was to call this function once to generate the dictionary, then simplify, then call it again passing in the dictionary so it could be re-applied.
    # if status:
    #     simplify_subshapes(label, shapeTool, colorTool, loadedCols)
shapeTool.UpdateAssemblies()
# Re-export
writer = STEPCAFControl_Writer()
Interface_Static.SetIVal_("write.step.assembly", 2)
Interface_Static.SetIVal_("write.stepcaf.subshapes.name", 1)
writer.Transfer(doc, STEPControl_AsIs)
writer.Write("testcolours-simplified.step")
There's a lot of stuff here for a minimal reproducible example, but the general flow of the program is that we import the step file:
reader.ReadFile("testcolours.step")
reader.Transfer(doc)
Then we iterate through each label in the file (essentially every node in the tree):
labels = TDF_LabelSequence()
shapeTool.GetShapes(labels)
# Simplify each shape that was imported
for i in range(1, labels.Length()+1):
    label = labels.Value(i)
    shape = shapeTool.GetShape_(label)
We skip any labels marked as assemblies since they contain children and we only want to simplify individual bodies. We then call simplify(shape), which performs the simplification and returns a new shape, and then call shapeTool.SetShape() to bind the new shape to the old label.
The thing that doesn't work here is that, as explained, Component3 and Component4 don't get marked as Assemblies; they are treated as SimpleShapes, and when they are simplified as one shape the colours are lost.
One solution I attempted was to call a method simplify_subshapes() which would iterate through each of the subshapes and do the same thing as the main loop, simplifying them and then calling SetShape(). This ended up being even worse, as it resulted in those bodies not being simplified at all but still losing their colours.
I also attempted to use the simplify_subshapes() method to make a dictionary of all the colours of the subshapes, then simplify the COMPOUND shape and then call the same method again to this time re-apply the colours to the subshapes using the dictionary (the code for this is commented out with an explanation as to what it did).
col = find_color(label, colorTool)
# print(f"{name} RGBA: {col[0]:.5f} {col[1]:.5f} {col[2]:.5f} {col[3]:.5f} defined={col[4]}")
cols[label.Tag()] = col
if set_colours != None:
    colorTool.SetColor(label, set_colours[label.Tag()][5], XCAFDoc_ColorSurf)
As far as I see it the issue could be resolved either by getting open cascade to import Component3 and Component4 as Assemblies OR by finding a way to make SetShape() work as intended on subshapes.
Here's a link to the test file:
testcolours.step
I am trying to resize a dataset and store the new values using the h5py package in Python. My dataset size keeps increasing at every time instance, and I would like to append to the .h5 file using the resize function. However, I run into errors with my approach. The variable dset is an array of datasets.
import os
import h5py
import numpy as np
path = './out.h5'
os.remove(path)
def create_h5py(path):
    with h5py.File(path, "a") as hf:
        grp = hf.create_group('left')
        dset = []
        dset.append(grp.create_dataset('voltage', (10**4,3), maxshape=(None,3), dtype='f', chunks=(10**4,3)))
        dset.append(grp.create_dataset('current', (10**4,3), maxshape=(None,3), dtype='f', chunks=(10**4,3)))
    return dset
if __name__ == '__main__':
    dset = create_h5py(path)
    for i in range(3):
        if i == 0:
            dset[0][:] = np.random.random(dset[0].shape)
            dset[1][:] = np.random.random(dset[1].shape)
        else:
            dset[0].resize(dset[0].shape[0]+10**4, axis=0)
            dset[0][-10**4:] = np.random.random((10**4,3))
            dset[1].resize(dset[1].shape[0]+10**4, axis=0)
            dset[1][-10**4:] = np.random.random((10**4,3))
EDIT
Thanks to tel I was able to solve this: replace with h5py.File(path, "a") as hf: with hf = h5py.File(path, "a").
The problem
Not sure about the rest of your code, but you can't use the context manager pattern (i.e. with h5py.File(foo) as bar:) within a function that returns a dataset. As you point out in the comment under your question, this means that by the time you try to access the dataset, the actual HDF5 file will have already been closed. The dataset objects in h5py are like live views into the file, so they require that the file remain open in order to use them. Thus, you're getting errors.
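A minimal standalone illustration of that behaviour (the file name here is hypothetical):

import h5py

with h5py.File('demo.h5', 'w') as hf:
    dset = hf.create_dataset('x', (10,))
print(dset[0])  # raises an error: the file backing this dataset is already closed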
A solution
It's a good idea to always interact with files within a managed context (i.e. within a with clause). If your code throws an error, the context manager will (almost always) ensure that the file is closed. This helps avoid any potential loss of data resulting from a crash.
In your case, you can have your cake (encapsulate your dataset creation routines in a separate function) and eat it too (interact with the HDF5 file within a managed context) by writing your own context manager to look after the file for you.
It's actually pretty simple to code. Any Python object that implements the __enter__ and __exit__ methods is a valid context manager. Here's a complete working version:
import os
import h5py
import numpy as np
path = './out.h5'
try:
    os.remove(path)
except OSError:
    pass
class H5PYManager:
    def __init__(self, path, method='a'):
        self.hf = h5py.File(path, method)

    def __enter__(self):
        # when you call `with H5PYManager(foo) as bar`, the return of this method will be assigned to `bar`
        return self.create_datasets()

    def __exit__(self, type, value, traceback):
        # this method gets called when you exit the `with` clause, including when an error is raised
        self.hf.close()

    def create_datasets(self):
        grp = self.hf.create_group('left')
        return [grp.create_dataset('voltage', (10**4,3), maxshape=(None,3), dtype='f', chunks=(10**4,3)),
                grp.create_dataset('current', (10**4,3), maxshape=(None,3), dtype='f', chunks=(10**4,3))]
if __name__ == '__main__':
    with H5PYManager(path) as dset:
        for i in range(3):
            if i == 0:
                dset[0][:] = np.random.random(dset[0].shape)
                dset[1][:] = np.random.random(dset[1].shape)
            else:
                dset[0].resize(dset[0].shape[0]+10**4, axis=0)
                dset[0][-10**4:] = np.random.random((10**4,3))
                dset[1].resize(dset[1].shape[0]+10**4, axis=0)
                dset[1][-10**4:] = np.random.random((10**4,3))
@tel provided an elegant solution to the problem. I outlined a simpler approach in my comments below his answer. It is simpler for a beginner to code (and understand). Basically, it makes a few minor changes to @Maxtron's original code. The modifications are:
move with h5py.File(path, "a") as hf: to the __main__ routine
pass hf in create_h5py(hf)
I also added a test before os.remove() to avoid errors if the h5 file doesn't exist.
My suggested modifications are below:
import h5py, os
import numpy as np
path = './out.h5'
# test existence of H5 file before deleting
if os.path.isfile(path):
    os.remove(path)

def create_h5py(hf):
    grp = hf.create_group('left')
    dset = []
    dset.append(grp.create_dataset('voltage', (10**4,3), maxshape=(None,3), dtype='f', chunks=(10**4,3)))
    dset.append(grp.create_dataset('current', (10**4,3), maxshape=(None,3), dtype='f', chunks=(10**4,3)))
    return dset

if __name__ == '__main__':
    with h5py.File(path, "a") as hf:
        dset = create_h5py(hf)
        for i in range(3):
            if i == 0:
                dset[0][:] = np.random.random(dset[0].shape)
                dset[1][:] = np.random.random(dset[1].shape)
            else:
                dset[0].resize(dset[0].shape[0]+10**4, axis=0)
                dset[0][-10**4:] = np.random.random((10**4,3))
                dset[1].resize(dset[1].shape[0]+10**4, axis=0)
                dset[1][-10**4:] = np.random.random((10**4,3))
I'm working on a code to read and display the results of a Finite Element Analysis (FEA) calculation. The results are stored in several (relatively big) text files that contain a list of nodes (ID number, location in space) and lists for the physical fields of relevance (ID of node, value of the field on that point).
However, I have noticed that when I'm running a FEA case in the background and I try to run my code at the same time, it returns errors: not always the same one, and not always at the same iteration, all seemingly at random and without any modification to the code or to the input files whatsoever, just by hitting the RUN button seconds apart between runs.
Examples of the errors that I'm getting are:
keys[key] = np.round(np.asarray(keys[key]),7)
TypeError: can't multiply sequence by non-int of type 'float'
#-------------------------------------------------------------------------
triang = tri.Triangulation(x, y)
ValueError: x and y arrays must have a length of at least 3
#-------------------------------------------------------------------------
line = [float(n) for n in line]
ValueError: could not convert string to float: '0.1225471E'
In case you are curious, this is my code (keep in mind that it is not finished yet and that I'm a mechanical engineer, not a programmer). Any feedback on how to make it better is also appreciated:
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import numpy as np
import os
triangle_max_radius = 0.003
respath = 'C:/path'
fields = ['TEMPERATURE']
# Plot figure definition --------------------------------------------------------------------------------------
fig, ax1 = plt.subplots()
fig.subplots_adjust(left=0, right=1, bottom=0.04, top=0.99)
ax1.set_aspect('equal')
# -------------------------------------------------------------------------------------------------------------
# Read outputfiles --------------------------------------------------------------------------------------------
resfiles = [f for f in os.listdir(respath) if (os.path.isfile(os.path.join(respath,f)) and f[:3]=='csv')]
resfiles = [[f,int(f[4:])] for f in resfiles]
resfiles = sorted(resfiles,key=lambda x: (x[1]))
resfiles = [os.path.join(respath,f[:][0]).replace("\\","/") for f in resfiles]
# -------------------------------------------------------------------------------------------------------------
# Read data inside outputfile ---------------------------------------------------------------------------------
for result_file in resfiles:
    keys = {}
    keywords = []
    with open(result_file, 'r') as res:
        for line in res:
            if line[0:2] == '##':
                if len(line) >= 5:
                    line = line[:3] + line[7:]
            line = line.replace(';',' ')
            line = line.split()
            if line:
                if line[0] == '##':
                    if len(line) >= 3:
                        keywords.append(line[1])
                        keys[line[1]] = []
                elif line[0] in keywords:
                    curr_key = line[0]
                else:
                    line = [float(n) for n in line]
                    keys[curr_key].append(line)
    for key in keys:
        keys[key] = np.round(np.asarray(keys[key]),7)
    for item in fields:
        gob_temp = np.empty((0,4))
        for node in keys[item]:
            temp_coords, = np.where(node[0] == keys['COORDINATES'][:,0])
            gob_temp_coords = [node[0], keys['COORDINATES'][temp_coords,1], keys['COORDINATES'][temp_coords,2], node[1]]
            gob_temp = np.append(gob_temp,[gob_temp_coords],axis=0)
        x = gob_temp[:,1]
        y = gob_temp[:,2]
        z = gob_temp[:,3]
        triang = tri.Triangulation(x, y)
        triangles = triang.triangles
        xtri = x[triangles] - np.roll(x[triangles], 1, axis=1)
        ytri = y[triangles] - np.roll(y[triangles], 1, axis=1)
        maxi = np.max(np.sqrt(xtri**2 + ytri**2), axis=1)
        triang.set_mask(maxi > triangle_max_radius)
        ax1.tricontourf(triang,z,100,cmap='plasma')
        ax1.triplot(triang,color="black",lw=0.2)
plt.show()
So back to the question: is it possible for the accuracy/performance of Python to be affected by CPU load or any other 'external' factors? Or is that not an option, and there's definitely something wrong with my code (which works well under other circumstances, by the way)?
No, other processes only affect how often your process gets time slots to execute -- i.e., from a user's perspective, how quickly it completes its job.
If you're having errors under load, this means there are errors in your program's logic -- most probably, race conditions. They basically boil down to making assumptions about your environment that are no longer true when there's other activity in it. E.g.:
Your program is multithreaded, and the logic makes assumptions about which order threads are executed in. (This includes assumptions about how long some task would take to complete.)
Your program is using shared resources (files, streams etc) that other processes are also using at the same time. (E.g. some other program is in the process of (over)writing a file while you're trying to read it. Or, if you're reading from a stream, not all data are available yet.)
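For the second case, one defensive pattern (a sketch I'm adding here, assuming the FEA job writes each result file incrementally) is to wait until a result file stops growing before parsing it:

import os
import time

def wait_until_stable(path, interval=0.5, checks=3):
    # block until the file size stays constant for `checks` consecutive
    # intervals, suggesting the writer has finished with it
    stable, last_size = 0, -1
    while stable < checks:
        size = os.path.getsize(path)
        if size == last_size:
            stable += 1
        else:
            stable, last_size = 0, size
        time.sleep(interval)

This would also explain the '0.1225471E' error above: the file was likely read mid-write, before the exponent digits of that number had been flushed to disk.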
I have a set of traces in one folder Folder_Traces:
Trace1.npy
Trace2.npy
Trace3.npy
Trace4.npy
...
In my code, I must concatenate all traces and put them in one file. Each trace is a table. The big file where I put all my traces is a table containing a set of tables. This file looks like this: All_Traces=[[Trace1],[Trace2],[Trace3],...,[Tracen]]
import numpy as np
import matplotlib.pyplot as plt
sbox=( 0x63,0x7c,0x77,0x7b,0xf2,0x6b..........)
hw = [bin(x).count("1") for x in range(256)]
print (sbox)
print ([hw[s] for s in sbox])
# Start calculating template
# 1: load data
tempTraces = np.load(r'C:\\Users\\user\\2016.06.01-09.41.16_traces.npy')
tempPText = np.load(r'C:\\Users\\user\\2016.06.01-09.41.16_textin.npy')
tempKey = np.load(r'C:\\Users\\user\\2016.06.01-09.41.16_keylist.npy')
print (tempPText)
print (len(tempPText))
print (tempKey)
print (len(tempKey))
plt.plot(tempTraces[0])
plt.show()
tempSbox = [sbox[tempPText[i][0] ^ tempKey[i][0]] for i in range(len(tempPText))]
print (sorted(tempSbox))
So, what I need is to use all my trace files without concatenation, because concatenation causes many memory problems. That means changing this line: tempTraces = np.load(r'C:\\Users\\user\\2016.06.01-09.41.16_traces.npy') to point at my folder directly, then loading each trace and running the necessary analysis on it. How can I resolve that, please?
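For illustration, a minimal sketch of that approach (the folder name comes from the question; the path and the per-trace analysis are placeholders):

import os
import numpy as np

trace_dir = r'C:\Users\user\Folder_Traces'  # hypothetical path to the folder of traces
for fname in sorted(os.listdir(trace_dir)):
    if not fname.endswith('.npy'):
        continue
    trace = np.load(os.path.join(trace_dir, fname))  # load one trace at a time
    # ... analyse `trace` here; it is released before the next file is loaded,
    # so the whole set is never concatenated in memory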
I created a simple csv file with numbers that approach pi, and I would like to create and store the output as a png. I have a very simple csv; each row contains the number I want to graph and
import pandas as pd
import csv
import matplotlib.pyplot as plt
from decimal import Decimal
def create_png():
    df = pd.read_csv('sticks.csv', names=["xstk", "stk"])
    sumdf = df.sum(0)
    num1 = sumdf['xstk']
    num2 = sumdf['stk']
    total = num1 + num2
    aproxpi = [(2*float(total))/num1]
    with open('aproxpi.csv', 'a') as pifile:
        piwriter = csv.writer(pifile, delimiter=' ')
        piwriter.writerow(aproxpi)
    Piplot = pd.read_csv('aproxpi.csv', names=['~Pi'])
    # Piplot.groupby('~Pi')
    Piplot.plot(title='The Buffon Needle Experiment')

if __name__ == "__main__":
    create_png()
When I run this code nothing happens. If I use the show method on the AxesSubplot, it raises an exception. How can this be accomplished?
You need to call plt.show() to actually see the plot.
This code seems very incomplete - is there more you can give us?
It may be that Piplot.plot needs to have x and y specified, instead of simply a title. I believe that you need to create a new plot object and pass the data into it, rather than calling data.plot() as you are now. See the documentation.
Additionally, taking a look at this question may help.
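Since the stated goal is to store the output as a png, one option (a sketch; pandas' plot returns a matplotlib Axes object) is to grab the underlying figure and save it instead of, or in addition to, showing it:

ax = Piplot.plot(title='The Buffon Needle Experiment')
ax.get_figure().savefig('aproxpi.png')  # writes the plot to disk as a PNG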