I have been working on code for a project at work that will (hopefully) take in images from a scanning electron microscope and generate 3D STL files of the structures we're imaging. I'm at the stage where I'm trying to generate a 3D structure from a 'coloured in' binary image I've made with some edge detection code I wrote. I came across the post How can i extrude a stl with python, which does essentially what I need (generating a meshed 3D structure from a binary image). I've tried using/adapting the code in the answer to that post (see below), but I keep running into the following error:
polyline2 = mr.distanceMapTo2DIsoPolyline(dm.value(), isoValue=127)
RuntimeError: Bad expected access
I can't find anything online about why this is happening, and I'm no expert in Python, so I have no idea myself. If anyone has an idea, I'd really appreciate it!
Code from the answer to the post above:
import meshlib.mrmeshpy as mr
# load image as Distance Map object:
dm = mr.loadDistanceMapFromImage(mr.Path("your-image.png"), 0)
# find boundary contour of the letter:
polyline2 = mr.distanceMapTo2DIsoPolyline(dm.value(), isoValue=127)
# triangulate the contour
mesh = mr.triangulateContours(polyline2.contours2())
# extrude itself:
mr.addBaseToPlanarMesh(mesh, zOffset=30)
# export the result:
mr.saveMesh(mesh, mr.Path("output-mesh.stl"))
I have tried the following:
Reconfiguring the MeshLib package that this command uses. Package docs here: https://meshinspector.github.io/MeshLib/html/index.html#PythonIntegration
Updating Visual Studio/Python/MeshLib
In older versions of the meshlib Python module, RuntimeError: Bad expected access indicated that mr.loadDistanceMapFromImage had failed; you should have checked for that like this:
import meshlib.mrmeshpy as mr
# load image as Distance Map object:
dm = mr.loadDistanceMapFromImage(mr.Path("your-image.png"), 0)
# check dm
if not dm.has_value():
    raise Exception(dm.error())
# find boundary contour of the letter:
polyline2 = mr.distanceMapTo2DIsoPolyline(dm.value(), isoValue=127)
# triangulate the contour
mesh = mr.triangulateContours(polyline2.contours2())
# extrude itself:
mr.addBaseToPlanarMesh(mesh, zOffset=30)
# export the result:
mr.saveMesh(mesh, mr.Path("output-mesh.stl"))
But in the current release, your code will raise an exception with the real error.
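That is, in current releases you no longer need the has_value() check; a minimal sketch, assuming the load call itself raises as described above, is just to catch the exception and read the real cause from its message:
import meshlib.mrmeshpy as mr

try:
    dm = mr.loadDistanceMapFromImage(mr.Path("your-image.png"), 0)
except Exception as e:
    # In recent releases a failed load (e.g. a wrong path) raises here with the real cause
    raise SystemExit(f"loading the distance map failed: {e}")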
Please make sure that the path is correct. If that doesn't help, please provide more info, such as the PNG file, your Python version, your MeshLib version, and anything else you find relevant.
P.S. If there is a real problem with MeshLib, it is better to open an issue on GitHub.
So this GIF looks perfectly fine before opening:
But when opened with Pillow using
imageObject = Image.open(path.join(petGifs, f"{pokemonName}.gif"))
it bugs out, adding various boxes with colors similar to those of the source image. This is an example frame, but almost every frame is different, and it's in different spots depending on the GIF:
The only thing that has worked to fix this is ezgif's unoptimize option (found on their optimize page). But I'd need to do that for each GIF, and there are a lot of them.
I need either a way to bulk unoptimize, or a new way to open the GIFs in Python (I'm currently using Pillow) that will handle this.
At least for extracting proper single frames there might be a solution.
The disposal method for all frames (except the first) is set to 2, which is "restore to background color".
Diving through Pillow's source code, you'll find the corresponding line where disposal method 2 is handled, and, shortly after, you'll find:
# by convention, attempt to use transparency first
color = (
    frame_transparency
    if frame_transparency is not None
    else self.info.get("background", 0)
)
self.dispose = Image.core.fill("P", dispose_size, color)
If you check the faulty frames, you'll notice that the dark green color of the unwanted boxes is located at position 0 of the palette. So, it seems, the wrong color is picked for the disposal, because – I don't know why yet – the else case above is taken instead of the transparency information – which would be there!
So, let's just override the possibly faulty stuff:
from PIL import Image, ImageSequence

# Open GIF
gif = Image.open('223vK.gif')

# Initialize list of extracted frames
frames = []

for frame in ImageSequence.Iterator(gif):
    # If dispose is set, and its color is set to 0, use the transparency information
    if frame.dispose is not None and frame.dispose[0] == 0:
        frame.dispose = Image.core.fill('P', frame.dispose.size,
                                        frame.info['transparency'])
    # Convert frame to RGBA
    frames.append(frame.convert('RGBA'))
# Visualization overhead
import matplotlib.pyplot as plt
plt.figure(figsize=(8, 8))
for i, f in enumerate(frames, start=1):
    plt.subplot(8, 8, i), plt.imshow(f), plt.axis('off')
plt.tight_layout(), plt.show()
The extracted frames look like this:
That seems fine to me.
If, by chance, the transparency information is actually set to 0, no harm should be done here, since we (re)set the disposal with the still-correct transparency information.
I don't know if (re)saving to GIF will work, since the frames are now in RGBA mode, and saving to GIF from there is tricky as well.
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.19041-SP0
Python: 3.9.1
PyCharm: 2021.1.3
Matplotlib: 3.4.2
Pillow: 8.3.1
----------------------------------------
You can try to use:
from PIL import Image, ImageSequence
im = Image.open(f"{pokemonName}.gif")
index = 1
for frame in ImageSequence.Iterator(im):
    frame.save("frame%d.png" % index)
    index += 1
I've found a solution that I like for unoptimizing gifs which might be of use to you.
It uses the gifsicle library, which is a command line tool for working with gifs. Crucially, gifsicle lets you unoptimize gifs like yours (I think the specific name of the optimization in your gif is "cumulative layers").
Once you install it with your package manager of choice, you can either call it within your code via Python's subprocess library, or use it yourself from the command line.
You specifically mentioned a way to bulk unoptimize, and you can do that very easily with gifsicle via something like:
gifsicle -U -b *.gif
This will overwrite every gif in the working directory with an unoptimized version in one go. If you want to keep the optimized copies, make backups first. See the manual page for more info about how to use gifsicle.
Once the gif is unoptimized, Python should be able to open it normally.
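If you'd rather do the bulk run from Python than from the shell, a minimal sketch using the standard subprocess module (assuming gifsicle is on your PATH and the GIFs sit in the current directory):
import subprocess
from pathlib import Path

# Unoptimize every GIF in place: -U unoptimizes, -b modifies the files in batch
for gif in Path(".").glob("*.gif"):
    subprocess.run(["gifsicle", "-U", "-b", str(gif)], check=True)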
MWE
To generate PlantUML diagrams in the (sub)folder /Diagrams/ I use the following Python script:
from plantuml import PlantUML
import os
from os.path import abspath
from shutil import copyfile
os.environ['PLANTUML_LIMIT_SIZE'] = str(4096 * 4)  # set max width to 4 times the default (16,384)

server = PlantUML(url='http://www.plantuml.com/plantuml/img/',
                  basic_auth={},
                  form_auth={}, http_opts={}, request_opts={})

diagram_dir = "./Diagrams"
#directory = os.fsencode()
for file in os.listdir(diagram_dir):
    filename = os.fsdecode(file)
    if filename.endswith(".txt"):
        server.processes_file(abspath(f'./Diagrams/{filename}'))
It is used to generate a diagram from, for example, the following test.txt file:
@startuml
'Enforce straight lines
skinparam linetype ortho
' Set direction of graph hierarchy
left to right direction
' create work package data
rectangle "something something something" as ffd0
rectangle "something something something" as ffd1
rectangle "something something something something something" as ffd2
rectangle "something something something something" as ffd3
rectangle "something something somethingsomethingsomething" as ffd4
rectangle "something something something something something something" as ffd5
rectangle "something something something something" as ffd6
rectangle "something something something " as ffd7
' Implement graph hierarchy
ffd0-->ffd1
ffd1-->ffd2
ffd2-->ffd3
ffd3-->ffd4
ffd4-->ffd5
ffd5-->ffd6
ffd6-->ffd7
@enduml
Expected behavior
Because I set the PLANTUML_LIMIT_SIZE variable to 16384 (pixels) as the FAQ suggests, I would expect this to fill up the picture of the diagram with all the blocks connected side by side up to a max width of 4096 * 4 pixels.
To test whether perhaps setting it from the python script was implemented incorrectly I also tried to set it manually with: set PLANTUML_LIMIT_SIZE=16384 to expect the same behavior as explained in the above paragraph (a picture filled up till 16384 pixels).
Observed behavior
Instead, PlantUML cuts off the picture at 2000 horizontal pixels, as shown in the figure below:
Question
How can I ensure that PlantUML does not cut off the blocks of the diagram at n pixels (height or width), from a Python script?
The best way I've found to prevent diagrams from being cut off, without trying to guess at the size or picking some arbitrarily large limit, is to select SVG output.
Note that setting PLANTUML_LIMIT_SIZE is only going to have an effect if you're running PlantUML locally, but it appears the Python interface you're using sends the diagram to the online service. I don't know the internals of that interface, but per the documentation you should be able to get SVG output by using http://www.plantuml.com/plantuml/svg/ as the service URL.
If you need the final image in PNG format, you will need to convert it with another tool.
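As a sketch, the only change to the question's script would be the endpoint URL (processes_file is the same call used there; the output files will then be SVG rather than PNG):
from plantuml import PlantUML

# Point at the SVG endpoint instead of /img/ so output is not clipped by the raster limit
server = PlantUML(url='http://www.plantuml.com/plantuml/svg/')
server.processes_file('Diagrams/test.txt')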
Approach 1:
To prevent the diagram from being cut off I took the following steps:
Downloaded the plantuml.jar from this location http://sourceforge.net/projects/plantuml/files/plantuml.jar/download
Put the diagram which I wrote in a someLargeDiagram.txt file, in the same directory as the plantuml.jar file.
Opened terminal on Ubuntu 20.04 in that same directory and ran:
java -jar plantuml.jar -verbose someLargeDiagram.txt
That successfully generated the diagram as a .png file, which was not cut off.
Approach 2:
After creating even larger graphs, they got cut off again, with a message to increase the PLANTUML_LIMIT_SIZE. I tried passing the size as an argument on the command line using: java -jar plantuml.jar -verbose -PLANTUML_LIMIT_SIZE=8192 Diagrams/latest.uml, however that did not work, nor did ..-PLANTUML_LIMIT_SIZE 8192... This link suggested one could set it as an environment variable, so I did that on Ubuntu 20.04 using the command: export PLANTUML_LIMIT_SIZE=8192, after which I successfully created a larger diagram that was not cut off with the command:
java -jar plantuml.jar -verbose Diagrams/latest.uml
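If you need to drive this from the Python script instead of the shell, a minimal sketch (assuming plantuml.jar and the Diagrams folder are reachable from the working directory) is to set the variable in the environment of the subprocess:
import os
import subprocess

env = os.environ.copy()
env["PLANTUML_LIMIT_SIZE"] = "8192"  # same limit as the export above

# Run the local PlantUML jar with the raised limit
subprocess.run(["java", "-jar", "plantuml.jar", "-verbose", "Diagrams/latest.uml"],
               env=env, check=True)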
I have two sets of matching points, e.g.
# first set of points
[[696.0, 971.3333333333334], [1103.3333333333333, 934.6666666666666], ...]
# second set of points
[[475.0, 458.6666666666667], [1531.3333333333333, 524.0], ...]
from two images. Right now I'm using this piece of code to align images:
points_source = np.array(source_coordinates)
points_destination = np.array(destination_coordinates)
h, status = cv2.findHomography(points_destination, points_source, cv2.RANSAC)
aligned_image = cv2.warpPerspective(destination_image, h, (source_image.shape[1], source_image.shape[0]))
It works well most of the time, but sometimes it warps the image and the alignment is bad. I found the estimateRigidTransform function, which would be ideal for me because it only translates and rotates the image, but it's deprecated, and when I try to use it, it throws an error:
Traceback (most recent call last):
File "align.py", line 139, in <module>
align(image, image2, source_coordinates, destination_coordinates)
File "align.py", line 111, in align
m = cv2.estimateRigidTransform(points_destination, points_source, fullAffine=False)
AttributeError: module 'cv2' has no attribute 'estimateRigidTransform'
I couldn't find any solution other than estimateRigidTransform. Is there any other function that would work for me? Maybe I can use warpPerspective to only change rotation and translation? I don't want to use the getAffineTransform function because it accepts only three points, and I want to use many more. My OpenCV version is 4.0.1-1.
The function I needed is: cv2.estimateAffinePartial2D()
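For reference, a minimal sketch of the question's alignment step rewritten around it (reusing the question's variables; note that estimateAffinePartial2D fits rotation, translation and uniform scale, and like findHomography it supports RANSAC):
import cv2
import numpy as np

points_source = np.array(source_coordinates)
points_destination = np.array(destination_coordinates)

# Fit a partial affine (rotation + translation + uniform scale) with RANSAC;
# the second return value is the inlier mask
m, inliers = cv2.estimateAffinePartial2D(points_destination, points_source,
                                         method=cv2.RANSAC)

# warpAffine takes the resulting 2x3 matrix instead of a 3x3 homography
aligned_image = cv2.warpAffine(destination_image, m,
                               (source_image.shape[1], source_image.shape[0]))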
Instead of using plain OpenCV, I would recommend linking your project with another library that has the algorithms you are looking for (and much more). Probably the best solution would be the Insight Toolkit (ITK) or the Visualization Toolkit (VTK). The former is much more complex and also much harder to learn, but the latter is actually very simple. They both use CMake, and there is no problem in compiling/linking etc.
ITK is especially designed for image processing. It includes so-called landmark-based registration, which is exactly what you need. A complete working example is available. Unfortunately, the library seems very complex at the beginning.
On the other hand, VTK also implements the same algorithm, but it can be used very simply (from the example):
vtkSmartPointer<vtkLandmarkTransform> landmarkTransform = vtkSmartPointer<vtkLandmarkTransform>::New();
landmarkTransform->SetSourceLandmarks(sourcePoints);
landmarkTransform->SetTargetLandmarks(targetPoints);
landmarkTransform->SetModeToRigidBody();
landmarkTransform->Update();
std::cout << landmarkTransform->GetMatrix() << std::endl;
I started working with the R package Bio3D
(http://thegrantlab.org/bio3d/index.php)
and encountered a problem while reproducing examples from the "Protein Structure Networks with Bio3D" tutorial
(http://thegrantlab.org/bio3d/tutorials/protein-structure-networks).
Here is the fragment I am trying to reproduce:
"
The code snippet below first sets the file paths for the example HIVpr starting structure (pdbfile) and trajectory data (dcdfile), then reads these files (producing the objects dcd and pdb).
dcdfile <- system.file("examples/hivp.dcd", package = "bio3d")
pdbfile <- system.file("examples/hivp.pdb", package = "bio3d")
# Read MD data
dcd <- read.dcd(dcdfile)
pdb <- read.pdb(pdbfile)
inds <- atom.select(pdb, resno = c(24:27, 85:90), elety = "CA")
trj <- fit.xyz(fixed = pdb$xyz, mobile = dcd,
               fixed.inds = inds$xyz, mobile.inds = inds$xyz)
Once we have the superposed trajectory frames we can assess the extent to which the atomic fluctuations of individual residues (in this very short example simulation) are correlated with one another and build a network from this data:
cij <- dccm(trj)
net <- cna(cij)
plot(net, pdb)
"
And up to this point everything works well.
# View the correlations in pymol
view.dccm(cij, pdb, launch = FALSE)
Here I open the generated file corr.inpcrd with PyMOL.
But instead of a nice cartoon 3D model I see just amino acid residues represented by dots.
I tried to solve the problem in PyMOL using settings for cartoons, ribbons, colors, and transparency, and the show command, but it changed nothing.
I would be grateful for your suggestions!
I don't have enough reputation to illustrate the expected and obtained outcomes with images, but I can probably send them directly if necessary.
Thank you!
Typically this will work if pymol is in your path for executables (see here: http://tinyurl.com/lzhpz3w for more about where bio3d expects to find pymol and muscle).
view.dccm(cij, pdb, launch = FALSE)
I don't use windows myself but if you post this question on the bio3d bitbucket issues page https://bitbucket.org/Grantlab/bio3d/issues you will get help from experienced windows bio3d users including the author of this function.
Try setting launch=TRUE in your call to the view.dccm() function to have both PDB and pymol script loaded for you.
I am trying to write code in Python with OpenCV 2.4.3, and it is giving me the error below:
Traceback (most recent call last):
File "/home/OpenCV-2.4.3/cam_try.py", line 6, in <module>
cv2.imshow('video test',im)
error: /home/OpenCV-2.4.3/modules/core/src/array.cpp:2482: error: (-206) Unrecognized or unsupported array type in function cvGetMat
I don't understand what that means. Can anybody help me out?
Thank you.
The relevant snippet of the error message is Unrecognized or unsupported array type in function cvGetMat. The cvGetMat() function converts arrays into a Mat. A Mat is the matrix data type that OpenCV uses in the world of C/C++ (note: the Python OpenCV interface you are utilizing uses NumPy arrays, which are then converted behind the scenes into Mat arrays). With that background in mind, the problem appears to be that the array im you're passing to cv2.imshow() is poorly formed. Two ideas:
This could be caused by quirky behavior on your webcam... on some cameras null frames are returned from time to time. Before you pass the im array to imshow(), try ensuring that it is not null (see the sketch after the example below).
If the error occurs on every frame, then eliminate some of the processing that you are doing and call cv2.imshow() immediately after you grab the frame from the webcam. If that still doesn't work, then you'll know it's a problem with your webcam. Otherwise, add back your processing line by line until you isolate the problem. For example, start with this:
while True:
    # Grab frame from webcam
    retVal, image = capture.read()  # note: ignore retVal
    # faces = cascade.detectMultiScale(image, scaleFactor=1.2, minNeighbors=2, minSize=(100,100), flags=cv.CV_HAAR_DO_CANNY_PRUNING)
    # Draw rectangles on image, and then show it
    # for (x,y,w,h) in faces:
    #     cv2.rectangle(image, (x,y), (x+w,y+h), 255)
    cv2.imshow("Video", image)
    i += 1
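For the null check mentioned in the first idea, a minimal sketch (assuming capture is your cv2.VideoCapture object; in the Python API a dropped frame shows up as retVal being False and/or image being None):
while True:
    retVal, image = capture.read()
    # Skip null frames instead of passing them to cv2.imshow()
    if not retVal or image is None:
        continue
    cv2.imshow("Video", image)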
source: Related Question: OpenCV C++ Video Capture does not seem to work
I was having the same error, and after about an hour of searching, I found that the path to the image was improperly defined. That solved my problem; maybe it will solve yours.
I solved the problem by using a BGR picture; the one from my cam was YUYV by default!
I am working on Windows with OpenCV 2.3.1 and Python 2.7.2. I had the same problem and solved it by pasting the following DLL files into the installation folder of Python: opencv_ffmpeg.dll and opencv_ffmpeg_64.dll. Maybe that helps you find a similar solution on Ubuntu.
For me, like Gab Hum did, I copied opencv_ffmpeg245.dll to my Python code folder. Then it worked.
Check your image array (or NumPy array) by printing it, to see whether you are trying to pass an array of several images in one shot instead of a single image at a time.
A single image array would look like :
[[[ 76 85 103] ... [ 76 85 103]], ... ]
Each row encloses its columns, each matrix of pixels encloses a number of rows, and each image comprises such a matrix; an array of several images would wrap these in one more level of nesting.
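A quick sanity-check sketch of that (im here stands for the array the question passes to imshow; a single BGR image is 3-dimensional, a batch of images is 4-dimensional and imshow will reject it):
# A single BGR image has shape (height, width, 3)
print(im.shape)
assert im.ndim == 3, "expected a single image, not a batch of images"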
It is always good to have a sanity check, to be sure your camera is working.
In my case I check that the camera works with:
raspistill -o test.jpg