Having issues with AI Feynman Symbolic Regression Algorithm - python

I'm currently using AI Feynman to try to create an expression for some data we've taken. However, I'm having a couple of issues during the process and was wondering if anybody in the community has any advice. I've looked around, but there doesn't seem to be much documentation available online.
I'm having two problems. The first has to do with units: I've searched all over, but I can't find how to format units so that AI Feynman can perform dimensional analysis. If anybody could provide insight, it would be much appreciated.
My main issue, however, is with the neural network stage. When running in a Jupyter notebook, AI Feynman gets through the brute-force stage, but as soon as it prints "Training a NN on the data...", the kernel dies. I've tried setting up GPUs manually, but that didn't work. Does anybody know how to solve these issues?
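For reference, here is roughly how I'm invoking it, following the pattern from the repo's README (the paths, file names, and variable names below are placeholders for my actual data). My understanding is that dimensional analysis is driven by a units.csv file whose rows are matched to the data columns via vars_name, but I haven't been able to confirm the expected format anywhere:

from aifeynman import run_aifeynman

# Placeholder invocation; "./data/", "measurements.txt" and the variable
# names stand in for my real setup. 60 is the brute-force time limit and
# "14ops.txt" the operator set, as in the README example.
run_aifeynman("./data/", "measurements.txt", 60, "14ops.txt",
              polyfit_deg=3, NN_epochs=500,
              vars_name=["x1", "x2", "y"])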

Related

Python code optimization - traceback_utils error_handler

I am trying to speed up some Python code. It's a fairly complex system, including a loop that runs inference of a convolutional neural network (TensorFlow 2) and subsequently handles its results in a reconstruction algorithm. The NN running on the GPU is fast enough, but the subsequent Python code can't keep up. So I profiled the code and found that the hot path seems to include a lot of error-handling machinery (TensorFlow's traceback_utils and error_handler show up prominently).
In a production setup I'd love to get rid of this for performance. Is there any way I can achieve this in python? It seems there is some back-tracing/error-handling involved in each function call. Is that correct? If so, in C++ I'd probably try inlining functions as a remedy. Are there similar strategies in python? I'd appreciate any general advice.
Note: No minimal working example or code, as it's a fairly complex setup and I figure the question is not very specific to my code. But I can share the code and provide more detailed information if people think it would be helpful.
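For context, the one mitigation I'm experimenting with is tracing the model once and calling the resulting concrete function in the hot loop, which skips some of the per-call Python conveniences. A minimal sketch (the model and input shape are stand-ins for my actual network):

import tensorflow as tf

# Stand-in model; mine is a different CNN with the same usage pattern.
model = tf.keras.applications.MobileNetV2(weights=None)

# Trace once, outside the loop, so the loop calls compiled dispatch
# instead of the Keras convenience wrappers on every iteration.
infer = tf.function(model).get_concrete_function(
    tf.TensorSpec([None, 224, 224, 3], tf.float32))

batch = tf.zeros([8, 224, 224, 3])
for _ in range(100):      # the reconstruction loop stands in here
    out = infer(batch)    # cheaper per call than model.predict()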

Hand recognition in Python without precise landmarks

Is there any mediapipe module that detects hands WITHOUT detecting their pose?
The reason is that the examples I find on the internet run slowly on my computer, and I don't need to know the position of the fingers, just the hand.
I tried to Google it, but all the videos/tutorials I find use the same code (which detects each landmark of the hand). I'm not very familiar with the ML area, and I don't know whether there is no ready-made model for this or whether I just don't know the correct terms to search for.
As an addendum, if anyone knows some way to use GPU acceleration on Windows, that would also work, as I believe it would improve the FPS. Everything I found said this is only possible on Linux, so I gave up and thought about looking for a simpler model that consumes less CPU.
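For context, this is essentially the tutorial code everyone posts, with the one speed knob I could find: model_complexity=0 selects the lighter landmark model (the camera index and confidence thresholds below are just example values). It still runs the landmark model, so a detection-only alternative would still be welcome:

import cv2
import mediapipe as mp

# Lighter configuration of the standard Hands solution.
hands = mp.solutions.hands.Hands(
    static_image_mode=False,
    max_num_hands=1,
    model_complexity=0,        # 0 = lighter/faster landmark model
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        print("hand detected")          # all I actually need
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) & 0xFF == 27:     # Esc quits
        break
cap.release()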

scipy.integrate.odeint fails depending on time steps

I use Python for scientific applications, especially for solving differential equations. I've already used the odeint function successfully on simple equation systems.
Right now my goal is to solve a rather complex system of over 300 equations. At this point the odeint function gives me reasonable results as long as the time steps in the t array are equal to or smaller than 1e-3. But I need bigger time steps, since the system has to be integrated over several thousand seconds. Bigger time steps yield the "Excess work done on this call" error.
Does anyone have experience with odeint and can tell me why this happens, even though odeint seems to choose its internal time steps automatically and then reports the results that match the time steps I give it?
I simply don't understand why this happens. I think I can work around the problem by integrating multiple times, but maybe someone knows a better solution. I apologize in advance in case there already is a solution elsewhere and I haven't seen it.
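For what it's worth, my reading of the docs is that this error appears when odeint exceeds its internal step limit (mxstep, whose solver default is 500) between two consecutive output times, so a coarser t array means more internal steps per output interval; that would explain why the same system integrates fine on a 1e-3 grid. A minimal sketch of the two workarounds I'm aware of (the stiff toy equation below just stands in for the 300-equation system):

import numpy as np
from scipy.integrate import odeint, solve_ivp

def rhs(y, t):
    # placeholder right-hand side, standing in for the real system
    return -1000.0 * y + 3000.0 - 2000.0 * np.exp(-t)

t = np.linspace(0.0, 5000.0, 501)   # coarse output grid, ~10 s spacing
y0 = [0.0]

# Workaround 1: raise the internal step limit between output points
sol = odeint(rhs, y0, t, mxstep=50000)

# Workaround 2: solve_ivp decouples output times (t_eval) from internal
# steps entirely, and BDF is made for stiff systems
sol2 = solve_ivp(lambda t, y: rhs(y, t), (0.0, 5000.0), y0,
                 method="BDF", t_eval=t)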

How to train a model in C++ with TensorFlow?

I tried to train a deep learning model for an experiment.
I found that TensorFlow is the best way to do this.
But the problem is that TensorFlow needs to be written in Python.
And my program contains many nested loops, like this:

for i = 1 to 2000
    for j = 1 to 2000

I know this is a big drawback for Python.
It's much slower than C.
I know TensorFlow has a C++ API, but it's not clear.
https://www.tensorflow.org/api_docs/cc/index.html
(This is the worst specification I have ever read.)
Can someone give me an easy example of it?
All I need is two simple pieces of code.
One is how to create a graph.
The other is how to load this graph and run it.
I really need this. I hope someone can help me out.
It's not so easy, but it is possible.
First, you need to create the TensorFlow graph in Python and save it to a file.
This article may help you
https://medium.com/jim-fleming/loading-a-tensorflow-graph-with-the-c-api-4caaff88463f#.krslipabt
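To make step one concrete, here is a minimal sketch (all node and file names are placeholders): build a graph in Python and serialise its GraphDef to a .pb file, which the C++ side can then load and run by referring to the "input" and "output" node names. A real model has variables, which you would typically freeze into constants first, as the article above describes.

import tensorflow as tf

# Build a tiny graph TF1-style (via compat.v1, since this workflow
# predates TF2's eager mode) and write it to ./model/graph.pb.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 2], name="input")
    w = tf.constant([[1.0], [2.0]], name="w")
    y = tf.matmul(x, w, name="output")

tf.io.write_graph(g.as_graph_def(), "./model", "graph.pb", as_text=False)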
Second, you need to compile libtensorflow, link it to your program (you need the TensorFlow headers as well, so it's a bit tricky) and load the graph from the file.
This article may help you this time
https://medium.com/jim-fleming/loading-tensorflow-graphs-via-host-languages-be10fd81876f#.p9s69rn7u

Use machine learning for simple robot control

I'd like to improve my little robot with machine learning.
Up to now it uses simple while loops and if-then decisions in its main function to act as a lawn-mowing robot.
My idea is to use SKLearn for that purpose.
Please help me to find the right first steps.
I have a few sensors that describe the world outside:
World = {yaw, pan, tilt, distance_to_front_obstacle, ground_color}
I have a state vector
State = {left_motor, right_motor, cutter_motor}
that controls the robot's 3 actuators.
I'd like to build a dataset of input and output values to teach sklearn the desired behaviour; after that, the input values should yield the correct output values for the actuators.
One example: if the motors are on and the robot should move forward but the distance meter tells constant values, the robot seems to be blocked. Now it should decide to draw back and turn and move to another direction.
First of all, do you think this is possible with sklearn, and second, how should I start?
My (simple) robot control code is here: http://github.com/bgewehr/RPiMower
Please help me with the first steps!
I would suggest using reinforcement learning. Here is a tutorial on Q-learning that fits your problem well.
If you want code in Python, right now I think there is no implementation of Q-learning in scikit-learn. However, I can give you some examples of Python code that you could use: 1, 2 and 3.
Also, please keep in mind that reinforcement learning is set up to maximize the sum of all future rewards, so you have to focus on the overall behaviour rather than on single decisions.
Good luck :-)
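To make the suggestion concrete, here is a minimal tabular Q-learning sketch in Python (the action set, the discretisation of the sensors, and all thresholds are assumptions for illustration, not taken from your robot's code):

import random
from collections import defaultdict

ACTIONS = ["forward", "turn_left", "turn_right", "reverse"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = defaultdict(float)                   # (state, action) -> estimated value

def discretise(distance_to_front_obstacle, ground_color):
    # crude example discretisation of two of your World sensors
    blocked = distance_to_front_obstacle < 20   # hypothetical threshold in cm
    return (blocked, ground_color)

def choose_action(state):
    # epsilon-greedy: explore occasionally, otherwise take the best known action
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # the standard Q-learning backup
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

You would call choose_action on each control cycle, drive the motors accordingly, compute a reward (e.g. negative whenever the robot is blocked), and call update with the new sensor state.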
The sklearn package contains a lot of useful tools for machine learning, so I don't think that's a problem; and if it is, there are definitely other useful Python packages. I think collecting data for the supervised learning phase will be the challenging part, and I wonder if it would be smart to make a track with tape within a grid system. That would make it easier to translate the track into labels (x, y positions in the grid). Each cell in the grid should be small if you want to make complex tracks later on, I think. It may also be very smart to check how they did it with the self-driving Google car. A rough sketch of what the supervised setup could look like is below.
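As a minimal illustration of the supervised route (the feature layout, labels, and numbers are all made up for the example, based on the World/State vectors in the question):

from sklearn.tree import DecisionTreeClassifier

# Each row: [yaw, pan, tilt, distance_to_front_obstacle, ground_color];
# each label: the action you chose while driving the robot by hand.
X = [[0, 0, 0, 150, 1],
     [0, 0, 0, 15, 1],     # blocked in front
     [10, 0, 0, 200, 0]]
y = ["forward", "reverse_and_turn", "forward"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[5, 0, 0, 12, 1]]))   # expected: "reverse_and_turn"

In practice you would log the sensor vector and your manual commands while driving the track, then train on that log.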
