I am new to posting on this forum, so this will be my first question ever (despite having used the forum for several years now :D).
What's my Problem:
I am working at a company now where we want to automate processes like finding the lowest and/or highest points/lines in classified 3D point cloud data (walls, roofs, ...). So I have a classified point cloud, and instead of drawing the lines of the lowest and highest points of walls or roofs myself, I'd like to figure out how Python could do the job for me!
What I'd like to know:
To start, I'd like to know the best and proper way to process point cloud data using Python. A simple Google search turned up several candidates (laspy, Open3D, ...), but I am very confused about which library I'd actually need for my mission, and where I should really put my effort into learning a particular package.
So, I am grateful for your answers and suggestions (maybe a similar thread already exists that I haven't found?).
Thanks
Max
You might want to check out the Open3D Tutorials found here.
There isn't one that does exactly what you're looking for, but pretty damn close (IMO).
I'm not interested in doing what you're doing, but if I was this is where I'd figure it out.
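If your data arrives as .las/.laz, laspy plus NumPy may already get you the min/max part; Open3D becomes more interesting once you need heavier geometry work or visualization. A minimal sketch, assuming a classified file where you already know the class code of the features you care about (the file name and the class code below are placeholders, adapt them to your data):

```python
import laspy
import numpy as np

las = laspy.read("scan.las")                 # placeholder file name
cls = np.asarray(las.classification)
z = np.asarray(las.z)

ROOF_CLASS = 6   # ASPRS code 6 is "building"; adjust to your own scheme
mask = cls == ROOF_CLASS

print("lowest point in class:", z[mask].min())
print("highest point in class:", z[mask].max())

# indices of those extreme points, if you also need their x/y coordinates
lowest = np.argmin(np.where(mask, z, np.inf))
highest = np.argmax(np.where(mask, z, -np.inf))
print(las.x[lowest], las.y[lowest], las.z[lowest])
print(las.x[highest], las.y[highest], las.z[highest])
```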
Related
This is more of a 'what is this called' kind of question than a technical one. I have recently started playing with PyAutoGUI and I am using it to do some automation. In order to improve the speed of the overall function, I am trying to narrow down the 'region' in which it's looking. How would I identify a region by looking for a specific "border", ignoring the internal contents? I don't really need any code (unless you're just that bored); I'm just trying to learn what techniques are available to accomplish this task, or maybe some helpful keywords that I can use in my search. I am having a very difficult time finding any resources that relate to my objective.
For example, how would I match the entire dimensions of the following picture regardless of what is inside the frame.
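One common technique for this is template matching, e.g. with OpenCV: save small crops of just the frame's corners and search the screenshot for them, which ignores whatever is inside the frame. A hedged sketch of that idea (the image file names are placeholders):

```python
import cv2
import numpy as np
import pyautogui

# hypothetical crops of just the frame's corners, saved beforehand
top_left_tpl = cv2.imread("corner_top_left.png")
bottom_right_tpl = cv2.imread("corner_bottom_right.png")

screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)

def best_match(image, template):
    """Return (x, y, score) of the best template match in the image."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(result)
    return loc[0], loc[1], score

x1, y1, s1 = best_match(screen, top_left_tpl)
x2, y2, s2 = best_match(screen, bottom_right_tpl)

if min(s1, s2) > 0.8:  # confidence threshold, tune as needed
    h, w = bottom_right_tpl.shape[:2]
    region = (x1, y1, (x2 + w) - x1, (y2 + h) - y1)  # (left, top, width, height)
    # restrict later PyAutoGUI searches to that region:
    # pyautogui.locateOnScreen("something_inside.png", region=region)
```

Useful search keywords: "template matching", "feature matching", "contour detection".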
I'm working on a project to break down 3D models but I'm quite lost. I hope you can help me.
I'm getting a 3D model from Autodesk BIM, and the format could be a native or generic CAD format (.stp, .igs, .x_t, .stl). Then I need to somehow "measure" the maximum dimensions to model a raw material body, which will always have the shape of a huge panel. Once I have both bodies, I will take the difference to extract the solids I need to analyze; and on each of these bodies I need to extract the faces, and then the lines or curves of each face.
This sounds like something really easy to do in CAD software, but the idea is to automate this process. I was looking into OpenSCAD, but it seems it only works for modelling geometry and doesn't handle imported solids well. I'm leaving a picture with the idea of what I need to do in the link below.
So, any idea how I can do this? Which language and library could help with this project?
I can see this automation being possible with a few in-between steps:
1. OpenSCAD can handle differences well, so your "Extract Bodies" step seems plausible.
1.5 Before going further, you'll have to explain how you "filtered out" the cylinder. Will you do this manually? If you don't, it will be considered in the analysis and you will end up with a lot of faces.
2. I don't think OpenSCAD gives you a vertex array. However, it can save to .stl, which is fairly easy to parse in the programming language of your choice; you'll have to study the .stl file structure a bit (this sounds much more frightening than it is: open an .stl in a text editor and you will probably see immediately what's going on). There's a small parsing sketch below.
3. Once you've parsed the file, you can calculate the lines with high-school math.
This is not an easy, GUI-driven way to do what you ask, but if you have a few programming skills you'll have your automation, and depending on how many projects you have, it may be worth it.
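For the parsing step (2.), here is a minimal sketch of handling the ASCII variant of .stl by hand; binary .stl needs struct unpacking or a library, and the file name is just a placeholder:

```python
def parse_ascii_stl(path):
    """Collect the triangles of an ASCII .stl as tuples of three (x, y, z) vertices."""
    triangles = []
    current = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "vertex":
                current.append(tuple(float(v) for v in parts[1:4]))
                if len(current) == 3:
                    triangles.append(tuple(current))
                    current = []
    return triangles

triangles = parse_ascii_stl("part.stl")
# the "lines" of the first triangle, as pairs of 3D points
a, b, c = triangles[0]
edges = [(a, b), (b, c), (c, a)]
```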
I have been working on this project and found that the library "trimesh" is better suited to solving this. Give it a shot and save some time.
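To make that concrete, a rough sketch of the bounding-box / difference / faces-and-edges steps with trimesh; the file name is a placeholder, and the boolean difference assumes one of trimesh's optional backends (e.g. manifold3d or Blender) is installed:

```python
import trimesh

part = trimesh.load("part.stl")                      # placeholder file name

# 1. overall dimensions -> size of the raw stock panel
print(part.extents)                                  # axis-aligned bounding-box size (dx, dy, dz)

# 2. model the stock and subtract the part to get the bodies to analyze
stock = trimesh.creation.box(extents=part.extents)
stock.apply_translation(part.bounds.mean(axis=0))    # center the stock on the part
removed = trimesh.boolean.difference([stock, part])  # needs a boolean backend

# 3. faces, edges and coplanar "CAD faces" of the result
print(removed.faces.shape)          # (n_triangles, 3) vertex indices
print(removed.edges_unique.shape)   # (n_edges, 2) vertex indices
print(len(removed.facets))          # groups of coplanar triangles
```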
I am a novice at Python, so I apologize if this is confusing. I am trying to create a 6-variable Venn diagram. I was trying to use matplotlib-venn; however, the problem I am having is that creating the sets is turning out to be impossible for me. My data is thousands of rows long with a unique index, and each column has boolean values for each category. It looks something like this:
|A|B|C|D|E|F|
|0|0|1|0|1|1|
|1|1|0|0|0|0|
|0|0|0|1|0|0|
Ideally I'd like to make a Venn diagram which would show how many of these people overlap across categories A, B and C. How would I go about doing this? If anyone could point me in the right direction, I'd be really grateful.
I found someone who had a similar problem to mine, and the solution at the end of that thread is what I'd like to end up with, except with 6 variables: https://community.plotly.com/t/how-to-visualize-3-columns-with-boolean-values/36181/4
Thank you for any help!
Perhaps you might try to be more specific about your needs and what you have tried.
Making a six-set Venn diagram is not trivial at all, even more so if you want to make the areas proportional. I made a program in C++ (nVenn) with a translation to R (nVennR) that can do that. I suppose it might be usable from Python, but I have never tried, and I do not know if that is what you want. Also, interpreting six-set Venn diagrams is not easy; you may want to check out UpSet for a different kind of representation. In the meantime, I can point you to a web page I made that explains how nVenn works (link).
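To get from the boolean indicator columns to the plain Python sets that matplotlib-venn (and most of these tools) expect, here is a small sketch. matplotlib-venn itself only supports 2 or 3 sets, so it draws a 3-way diagram for A, B, C; the tiny DataFrame is just dummy data standing in for your real table:

```python
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib_venn import venn3

# dummy data: boolean indicator columns with a unique person index
df = pd.DataFrame(
    {"A": [0, 1, 0], "B": [0, 1, 0], "C": [1, 0, 0],
     "D": [0, 0, 1], "E": [1, 0, 0], "F": [1, 0, 0]},
    index=["p1", "p2", "p3"],
).astype(bool)

# one set of index labels per category
sets = {col: set(df.index[df[col]]) for col in df.columns}

venn3([sets["A"], sets["B"], sets["C"]], set_labels=("A", "B", "C"))
plt.show()
```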
I'm working on a project where I have to match one video sequence with another. The actions and motions of the two videos are similar, since the video I'm matching against performs the exact same movement as the other one. Currently I'm leaning towards using dynamic time warping (DTW) to align the two videos, but I'm having trouble coming up with an approach. So I'm wondering if you have any source code I could work with, or any ideas on how to dissect this problem. Thank you
A starting point would be to understand DTW: https://www.cs.unm.edu/~mueen/DTW.pdf
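To make the idea concrete, here is a minimal sketch of the classic dynamic-programming DTW, assuming you've already reduced each video frame to a feature vector (e.g. pose keypoints or frame embeddings); the feature extraction itself is up to you:

```python
import numpy as np

def dtw(x, y, dist=lambda a, b: np.linalg.norm(a - b)):
    """Classic O(n*m) dynamic time warping.

    x, y: sequences of per-frame feature vectors.
    Returns the total alignment cost and the warping path as (i, j) frame pairs.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(x[i - 1], y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])

    # backtrack to recover which frame of x maps to which frame of y
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# usage: two sequences of 2D feature vectors of different lengths
a = [np.array([t, t]) for t in range(10)]
b = [np.array([t * 0.5, t * 0.5]) for t in range(20)]
cost, path = dtw(a, b)
```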
I would like to code a script that can locate a specific word or number in a financial statement. Financial statements contain roughly the same information; however, they are not identical or organized in the same way. My thought is that with TensorFlow I could train a neural network to locate the specific words or numbers for me. I am thinking that if I label different text and numbers in 1000 financial statements and use them to train the neural network, it will then be able to identify these numbers or words in any financial statement. For example, I would tell it, in all 1000 training statements, which number is the profit of the company.
Is this doable? I have been coding in Python for a couple of months, and so far I've built some web scrapers and integrated them with Twitter, Slack and Google Sheets. I would be very grateful for your thoughts on this project, and if anyone could steer me in the right direction by sharing relevant tutorials.
Thanks a lot!
It's great that you're getting started. Before thinking about the actual implementation using TensorFlow or any other library, I believe you should first try to understand the problem in terms of its basic domain.
I'm not really sure what exactly you are trying to achieve, but as a rough guess it's about finding out whether a statement turns out to be beneficial to the company or not, something like a semantic analysis type of problem.
So I strongly believe you should first learn the various methodologies related to semantic analysis and find the most appropriate technique.
In short: theory/understanding before the actual code.
Finally, I would suggest you ask such theoretical questions on the AI Stack Exchange; here on SO we generally deal with code, or things close to code.
I hope that makes sense? ;)
Drop a comment if you have any doubts.
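That said, once you do get to the implementation stage, one common way to frame "label the numbers/words and train a network" is token classification. A rough Keras skeleton, purely as a sketch; the vocabulary size, sequence length and tag set below are placeholders, not anything from this thread:

```python
import tensorflow as tf

VOCAB_SIZE = 20000   # size of your word index (placeholder)
MAX_LEN = 500        # tokens per statement, padded/truncated (placeholder)
NUM_TAGS = 3         # e.g. O, PROFIT, REVENUE (placeholder tag set)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 64, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Dense(NUM_TAGS, activation="softmax"),  # one tag per token
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (num_statements, MAX_LEN) integer token ids
# y: (num_statements, MAX_LEN) integer tag ids, one label per token
# model.fit(X, y, epochs=..., validation_split=0.1)
```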