Suppose there is a geographic region X. The celestial bodies move over that region through the year and, of course, they do not stay the same or remain in the same position. I am trying to build a 2D/3D chart that maps the movement of the bodies over X (and, given a certain time and place within X, shows the bodies and their locations at that time and place). I plan to do this using Python, but I lack knowledge of astronomy. Can I do it? Any pointers/modules/tutorials would help. Thanks.
As @postoronnim said, the astropy package provides you with everything you need for this task.
You can go here and you will have a working example.
Just a quick summary:
You can give a location for the observation (the main observatories on the planet are already available in the package, but you can define your own with latitude, longitude and elevation).
Then you need the coordinates of an object and the moment of the observation, and you can plot a 2D (or 3D, if you want to play with spherical coordinates) trajectory of your object in the sky. It is in general very useful to plot altitude vs. time to visualize when your object is visible.
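For reference, a minimal sketch of that workflow with astropy could look like this (the site coordinates, date and target object are placeholders I picked for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation, AltAz

# Observing site defined by latitude, longitude and elevation
# (placeholder values for a point inside your region X).
# For known observatories you can use EarthLocation.of_site(...) instead.
location = EarthLocation(lat=48.0 * u.deg, lon=16.0 * u.deg, height=200 * u.m)

# Coordinates of the object to observe (M31 here, just as an example).
target = SkyCoord(ra=10.68 * u.deg, dec=41.27 * u.deg)

# A grid of times spanning one night.
times = Time("2024-01-01 18:00:00") + np.linspace(0, 12, 200) * u.hour

# Transform to the horizontal (Alt/Az) frame at that site and time.
altaz = target.transform_to(AltAz(obstime=times, location=location))

# Altitude vs. time shows when the object is above the horizon.
plt.plot(times.datetime, altaz.alt.deg)
plt.xlabel("Time (UTC)")
plt.ylabel("Altitude [deg]")
plt.show()
```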
Hope this helped
I would suggest having a look at the open-source astronomy package Stellarium, with which you can simulate the sky for a given location and a given body. There is also documentation accompanying it that could be helpful in getting familiar with the algorithms it uses.
I intend to make a 3D model based on multi-view stereo images (basically 2D images of the same object from different angles and orientations) inside Blender from scratch. However, I am new to Blender.
I wanted to know if there are any tutorials on how to project a single pixel or point into Blender's 3D environment using Python. If not a tutorial, any documentation would do. I am still learning about this whole 3D reconstruction thing and am pretty new to it, so I am not sure whether these points are represented using a 3-dimensional matrix/array?
Basically I want to implement 3D reconstruction based on a paper written by some researchers. Almost every such project is in C++. I want to do it in Python in Blender and, if I am capable enough, make the resulting libraries open source.
Please suggest any prerequisites you think would help me. I have just started the 3rd year of my BSc Computer Science course and am very new to the world of computer graphics.
(My skillset is C, Java and Python.)
I would be very glad and appreciate any help.
Thank You
Link to website: https://vision.in.tum.de/research/image-based_3d_reconstruction/multiviewreconstruction
Yes, it can very likely be done in Blender, and in Python, at least for small geometries / low resolutions.
A valid approach for the kind of scenarios you seem to want to play with is based on the idea of "space carving" or "silhouette projection". A good description is in an old paper by Kutulakos and Seitz, which was based in part on earlier work by Szeliski.
Given a good estimation of the silhouettes, these methods can correctly reconstruct all convex portions of the object's surface, and the subset of concavities that are resolved in the photo hull. The remaining concavities are "patched" over and need to be reconstructed using a different method (e.g. stereo, or structured light). For the surfaces that can be reconstructed, space carving is generally more robust than stereo (since it is insensitive to the color and surface texture of the object) and can work on surfaces where structured light struggles (e.g. surfaces with specularities, or very dark objects with low reflectance for a laser stripe).
The basic idea is to use the silhouettes of the projection of the object in cameras around it to "remove" mass from an initial volume (e.g. a box) encompassing the object, a bit like a sculptor carving a statue by removing material from a block of marble.
Computationally, you can do it by representing the volume of space of interest with an octree, initialized at a minimal level of subdivision and then progressively refined. The refinement consists of projecting the vertices of the octree leaves into the cameras and identifying which leaves are completely outside or partially inside the silhouettes. The former are pruned, while the latter are split, and the process continues until no more leaves can be split or a maximum level of subdivision is reached. The hull of the octree is then extracted as a "watertight" mesh using standard methods.
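As a rough illustration of the carving step, here is a simplified sketch that uses a flat voxel grid instead of the octree refinement described above, and assumes the 3x4 camera projection matrices and binary silhouette masks are already available:

```python
import numpy as np

def carve_voxels(grid_pts, cameras):
    """Keep only the voxels whose projection falls inside every silhouette.

    grid_pts : (N, 3) array of voxel centre coordinates.
    cameras  : list of (P, mask) pairs, where P is a 3x4 projection matrix
               and mask is a boolean silhouette image for that camera.
    """
    keep = np.ones(len(grid_pts), dtype=bool)
    # Homogeneous coordinates for projection.
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])
    for P, mask in cameras:
        proj = homog @ P.T                       # project into the image plane
        uv = proj[:, :2] / proj[:, 2:3]          # perspective divide -> pixel coords
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        in_sil = np.zeros(len(grid_pts), dtype=bool)
        in_sil[inside] = mask[v[inside], u[inside]]
        keep &= in_sil                           # carve away voxels outside this silhouette
    return grid_pts[keep]
```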
Apart from the above paper, a much more detailed description can be found in an old patent by Geometrix, which sold a scanner based on the above ideas around the year 2000.
I have a series of GPS points which collectively form a polyline. Each of these GPS points has a time stamp and I can therefore compute things like journey time and average speed along the poly line.
I now wish to map the resulting polyline onto a road network. However, for obvious reasons, the GPS points don't line up with the actual infrastructure, and I must attempt to match them to it. Is there a Python library for doing this?
Check out pyproj, geopandas, and rtree.
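As a very rough illustration of the simplest approach, snapping each GPS fix to the nearest road segment, something like the sketch below works with geopandas/shapely (the shapefile name, CRS and sample coordinates are assumptions):

```python
import geopandas as gpd
from shapely.geometry import Point

# Road network as LineStrings; reproject to a metric CRS for distances
# (file name and EPSG code are placeholders).
roads = gpd.read_file("roads.shp").to_crs(epsg=32633)

def snap_to_road(lon, lat):
    """Snap a single GPS fix to the closest point on the nearest road segment."""
    pt = gpd.GeoSeries([Point(lon, lat)], crs="EPSG:4326").to_crs(roads.crs).iloc[0]
    # Brute-force nearest segment; an rtree spatial index speeds this up for big networks.
    nearest = roads.geometry.distance(pt).idxmin()
    line = roads.geometry.loc[nearest]
    # Closest point on that segment.
    return line.interpolate(line.project(pt))

# (lon, lat) fixes from the GPS track; placeholder values.
gps_points = [(16.370, 48.210), (16.371, 48.2105)]
snapped = [snap_to_road(lon, lat) for lon, lat in gps_points]
```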
I am using OpenCV and Python.
Let's say I have a video sequence of a car. I have tracked some 'interesting points' of the car with cv2.goodFeaturesToTrack and cv2.calcOpticalFlowPyrLK. Now, given the tracked points, I want to estimate a very rough shape (maybe a 3D box) of the car and its distance from the camera. It doesn't need to be that accurate.
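For context, the tracking part is essentially the standard Lucas-Kanade example from the OpenCV documentation, roughly like this (the video file name and parameter values are just placeholders):

```python
import cv2

cap = cv2.VideoCapture("car.mp4")  # placeholder file name

ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)

# Detect corners to track on the first frame.
p0 = cv2.goodFeaturesToTrack(old_gray, maxCorners=100,
                             qualityLevel=0.3, minDistance=7, blockSize=7)

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the points from the previous frame into the current one.
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, gray, p0, None, **lk_params)
    good_new = p1[st == 1]

    # Draw the tracked points.
    for x, y in good_new.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("tracked", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break

    old_gray = gray
    p0 = good_new.reshape(-1, 1, 2)
```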
On top of that, I want it to keep updating in real time. The closest YouTube video I can find that gives an idea of what I am trying to achieve is this one. I have found the new Structure from Motion module in OpenCV, but it is more about building a 3D model from a collection of points.
The question is: what is the best way of achieving this, and what kind of library can I use (especially for constructing the 3D space)?
It is also OK if I need to use C++ for this (although I am still not good at it yet).
Thanks.
I'm developing a GTK program with Python. I have to display a map and some nodes on this map, which walk around on streets. To achieve this I am using libchamplain.
Displaying the map was quite easy. But is there a way to check whether a coordinate (lat, lon) lies on a street? Or any other solution for putting some walking markers on the map?
Thank you.
I solved my problem.
I created a PathLayer and calculated a very, very simple route for a randomly chosen point on the map based on this algorithm. Then I use a marker object to draw that point on the route. Every second the point is removed and plotted at the next position.
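In case it helps anyone, the relevant bits look roughly like this (an untested sketch: the `view` variable, the route coordinates and the overall widget setup are assumptions, not the exact code I used):

```python
import gi
gi.require_version("Champlain", "0.12")
from gi.repository import GLib, Champlain

# Pre-computed route as a list of (lat, lon) pairs (placeholder values).
route = [(50.0000, 8.0000), (50.0005, 8.0004), (50.0010, 8.0008)]

# Draw the route itself as a path layer.
path_layer = Champlain.PathLayer()
for lat, lon in route:
    path_layer.add_node(Champlain.Coordinate.new_full(lat, lon))
view.add_layer(path_layer)          # `view` is the Champlain.View of the map widget

# A single point marker that will be moved along the route.
marker = Champlain.Point.new()
marker_layer = Champlain.MarkerLayer()
marker_layer.add_marker(marker)
view.add_layer(marker_layer)

step = {"i": 0}

def advance():
    # Move the marker to the next route position once per second.
    lat, lon = route[step["i"] % len(route)]
    marker.set_location(lat, lon)
    step["i"] += 1
    return True                     # keep the timeout running

GLib.timeout_add_seconds(1, advance)
```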
I have a list of [{'latitude': latitude, 'longitude': longitude}] data and I am looking for a Python-based library that I can use to analyse this and tell me the percentage of the path that is a pure straight line.
Example of such a path is here : http://gyazo.com/e65a4ecf43161bdfd126316f39c4d403
Thanks in advance
Update
I have attached a picture of the route on the map I am looking at. Basically, it is the running path of a person, around 4 km long. As seen on the map, the path of this "run" is very much a straight line, and a straight line over 4 km is impossible in the center of a city (which is where this run occurred), leading to the conclusion that it was done using transport (the underground metro).
The algorithm I want is one that detects, programmatically, what is clearly evident to the naked eye.
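For what it's worth, the kind of check I have in mind would be something like the sketch below, which compares the straight-line distance between the endpoints of each chunk with the distance actually travelled along it (the window size and threshold are arbitrary guesses):

```python
import math

def haversine(p, q):
    """Great-circle distance in metres between two {'latitude', 'longitude'} dicts."""
    lat1, lon1 = math.radians(p['latitude']), math.radians(p['longitude'])
    lat2, lon2 = math.radians(q['latitude']), math.radians(q['longitude'])
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def straight_fraction(points, window=10, threshold=0.98):
    """Fraction of the path made up of nearly straight chunks.

    A chunk of `window` consecutive segments counts as straight when the
    direct endpoint-to-endpoint distance is at least `threshold` times the
    distance travelled along the chunk.
    """
    straight = total = 0.0
    for i in range(0, len(points) - window, window):
        chunk = points[i:i + window + 1]
        travelled = sum(haversine(a, b) for a, b in zip(chunk, chunk[1:]))
        direct = haversine(chunk[0], chunk[-1])
        total += travelled
        if travelled > 0 and direct / travelled >= threshold:
            straight += travelled
    return straight / total if total else 0.0
```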