How can I run Python code on mobile devices?

I have developed a machine learning model in Python and want to run it on a mobile device.
The model requires the XGBoost machine learning algorithm plus a few signal processing libraries to extract signal features.
I don't want to perform training on the mobile device, only inference with the trained model.
What I have tried so far:
ML Kit - This is a Google service, but it uses TensorFlow and has no XGBoost support.
Core ML - Specifically for iOS, but signal processing support is not available.
Treelite - It can convert the model into C, but the generated C code does not include feature extraction. I tried to implement the feature extraction in C and Java, but I could not find or implement the required signal processing packages.
I checked various other links and articles, but found no support.
If there is any way to run Python packages directly on a mobile device, that would save my life.

I would suggest Pythonista.
It works well and has almost everything you need. You can't use every module, but many are available, and it costs $10.
Sadly, it's only available for iOS; you can get it from the App Store or take a look at the official page.
Here are some pros and cons:
Pros:
- Syntax highlighting
- Many modules (possibly even the full standard library, but I haven't tried that yet)
- Example files (how to use the sensors, etc.)
- Official docs: http://omz-software.com/pythonista/docs/
- (Mostly) full autocomplete
Cons:
- Not very usable on small screens
PS: If I missed something, feel free to edit this answer :)

You can try xgboost-predictor-java by just adding the dependency in Android Studio.
It takes your saved model as input and uses it for prediction.

Related

Can I use OpenGL on a local Wamp server?

My project is a website where a user uploads a MusicXML file and receives a video (similar to Synthesia) based on that file. I am using Python to parse the XML file and extract the useful information. With that information, I use PyOpenGL with GLUT to create animations and OpenCV to save each frame to a video.
I am able to run the program locally and it works. Now I am trying to use the program within my WAMP server. So my question is, how would I go about doing this? My plan was to call the program with PHP's shell_exec(), but nothing seems to happen. I've tested shell_exec() on simple test files that return a string, and that works. I have done some research and found that I can use Xvfb for headless server rendering. Any idea how I can implement this with PyOpenGL/GLUT? Also, is it okay to use PHP's shell_exec(), or should I be using something else to call my Python program?
First and foremost, decide whether you need/want GPU acceleration or not. There's little use in trying to hammer out GPU-accelerated OpenGL context creation if your target system doesn't even have a GPU.
Next you should come to terms with the fact that you'll no longer be able to use GLUT, because GLUT was designed around creating on-screen windows.
If you can live without a GPU and rely on software rasterization, you should look into OSMesa: https://mesa3d.org/osmesa.html
If you need GPU acceleration, check what GPU you'll be running on. If it's going to be an NVIDIA one, check out their excellent blog post on how to create a headless rendering context with EGL: https://devblogs.nvidia.com/egl-eye-opengl-visualization-without-x-server/
If it's an AMD or Intel GPU, then EGL in theory should work as well. However, using DRM+GBM will usually yield better results. There's an example project for that at https://github.com/eduble/gl
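If you would rather keep GLUT, the Xvfb route mentioned in the question sidesteps the on-screen window problem by giving the process a virtual X display via xvfb-run. A minimal sketch of the invocation, where the script name and arguments are placeholders:

```python
import subprocess

# xvfb-run starts a virtual framebuffer X server, runs the command
# under it, and tears the server down afterwards.
cmd = [
    "xvfb-run",
    "-a",                            # auto-pick a free display number
    "-s", "-screen 0 1280x720x24",   # server args: one 1280x720, 24-bit screen
    "python", "render_video.py", "score.xml",  # placeholder renderer + input
]
# From PHP this would be roughly:
#   shell_exec('xvfb-run -a -s "-screen 0 1280x720x24" python render_video.py score.xml');
# subprocess.run(cmd, check=True)  # uncomment on the server
```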

Xcode or Visual Studio? Building a human motion detection iOS app using the iPhone camera

I am a bioengineer and need to build a motion capture validation app on my iPhone 8. My goal is to have the app capture the motion of a person's full body and analyse it to confirm whether the person performed the intended movement correctly. The app flow is very simple: a "Welcome" page, then an Exercise page where the person's movement is recorded. Movements are in 2D and very simple: for example, a front-facing person raising a straight leg to the side by 45 degrees.
To build the app for iOS, I know the only way is to code it in Swift, so I am considering using Xcode. However, my motion capture analysis section will likely be in Python. If I understand correctly, Xcode does not work well with Python. Is there a way to make Xcode 11 work with Python? Or should I use Visual Studio Code?
How my motion capture analysis will work:
1. My app will have access to the iPhone camera (which one does not matter).
2. The app will not record a video but take pictures (at least 3 per second) from the camera feed.
3. Each image will be passed to my Python code.
4. The code will transform it into a skeletal model.
5. The skeletal model obtained will be compared to a "good movement" skeletal model.
6. The Python code will send a Yes/No validation message to the app.
7. The app will display a green tick if the movement was performed correctly, and a red cross if not.
So, as you can see, when my app runs, the Swift code (the skeleton of my app) and the Python code (the motion analysis of the person on camera) have to run together: a Python file will receive the images from the app, and the Swift code will call my Python code to get a YES/NO answer. I do not know whether that is something I can do in Xcode?
Could you help me?
Best regards,
H.R.
It's easy to call Python code using NSTask in Swift for macOS projects. I am not sure about iOS, but you can try using PythonKit:
#if canImport(PythonKit)
import PythonKit
#else
import Python
#endif
print(Python.version)
By default, when you import Python, Swift searches system library paths for the newest version of Python installed. To use a specific Python installation, set the PYTHON_LIBRARY environment variable to the libpython shared library provided by the installation.
You can use it like below:
let pythonInt: PythonObject = 1
In Swift, PythonObject represents an object from Python. All Python APIs use and return PythonObject instances.
Basic types in Swift (like numbers and arrays) are convertible to PythonObject. In some cases (for literals and functions taking PythonConvertible arguments), conversion happens implicitly. To explicitly cast a Swift value to PythonObject, use the PythonObject initializer.
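To make the Swift-to-Python handoff concrete, the Python side could simply be a module that PythonKit imports by name and calls. Everything below (the module name motion_validator, the validate function, the angle-threshold logic) is a hypothetical sketch, not a real API:

```python
# motion_validator.py (hypothetical module name)
# Swift would load it with:  let validator = Python.import("motion_validator")
# and call:                  validator.validate(angles, 45.0)
# PythonKit converts the returned bool back into a Swift PythonObject.

def validate(angles, target=45.0, tolerance=5.0):
    """Return True when every measured joint angle is within
    `tolerance` degrees of the expected `target` angle."""
    return all(abs(float(a) - target) <= tolerance for a in angles)
```

The real skeletal-model comparison would replace this threshold check, but the shape of the bridge (plain module, plain function, simple return value) stays the same.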

How to package SC instrument for beta testers?

I've built a sample instrument using the following architecture:
A Python script reads sample files from a Redis database stored on disk and sends OSC messages to SuperCollider with the path and pitch of a random selection of N samples. On the SC side, key presses from a MIDI interface are mapped to select and play one or more of the corresponding samples.
The prototype is functional, and I would like to release a beta for testers, but I have no clue how to package it. One option that seems plausible is to wrap it as a VST, but as far as I understand there is no stable wrapper for SC, and the safest bet would be to re-code the entire instrument as a VST.
Another option, which seems more viable, would be to wrap it as a standalone instrument. Would the beta testers need to have SC installed, or is there a way to wrap an SC server inside an executable?
Any ideas on this issue, even if they diverge from my original approach, will be highly appreciated.
Fortunately there are lots of options for this in SuperCollider. You may want to start with the documentation article on Making Standalone Applications, which discusses this rather thoroughly.
Alternatively, there are some pre-built standalones floating around, frequently on GitHub; I often use this repository to package up an installation or instrument and deploy it on a Raspberry Pi.
I'm not super familiar with VST or SuperCollider, but maybe you could try something like Docker, a container-based solution that might meet your needs.
You set up a Dockerfile, which lets you provide instructions to build a container with the SC server. Then let the person using it decide whether they want a Redis instance inside the same Docker container, or a separate Redis container.
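As an aside on the Python-to-SC bridge described in the question: the OSC messages can be encoded with the standard library alone, which keeps the packaged script dependency-free. This sketch follows the OSC 1.0 message layout; the address and arguments are made up:

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message with int/float/str arguments (OSC 1.0 layout)."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)      # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)      # big-endian float32
        elif isinstance(a, str):
            tags += "s"
            payload += _pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC type: {type(a)}")
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# The sampler script could then send this over UDP to sclang (default port 57120):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/sample/load", "kick.wav", 60), ("127.0.0.1", 57120))
```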

(ArcGIS) Create new function for Arcpy

For my graduation paper I am trying to create a new toolbox for ArcGIS using Python scripting. The problem is that I am stuck with my code because none of the existing functions in arcpy does what I need. So my question is: is it possible to create a new function in arcpy, or is this restricted to ESRI developers?
Another way to solve this problem would be to make some changes to the Cost Distance tool from Spatial Analyst. So my other question is: do I have access to the source code of the native ArcGIS tools? And if I do, can I change it to achieve my goal? Or is this also restricted?
Thanks,
Gabriel
You can create your own functions using Python and the arcpy site package. All of ESRI's tools are proprietary, and therefore most have restricted access. You can check whether a tool can be edited in ArcToolbox: for example, the Cost Distance tool is restricted, while the Spline with Barriers script tool can be edited by right-clicking on it.
You can create your own python toolbox for ArcPy following this help: http://resources.arcgis.com/en/help/main/10.2/index.html#//001500000022000000
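As a concrete starting point, a .pyt file is an ordinary Python module that defines a Toolbox class plus one class per tool, following the layout in ESRI's help. The sketch below is an empty, illustrative shell (all names are made up; real arcpy calls would go inside execute):

```python
class Toolbox(object):
    """Entry point ArcGIS looks for when it opens a .pyt file."""
    def __init__(self):
        self.label = "Graduation Toolbox"   # display name in ArcToolbox
        self.alias = "gradtools"
        self.tools = [CustomCostDistance]   # one class per tool

class CustomCostDistance(object):
    """Illustrative shell for a cost-distance variant built on arcpy."""
    def __init__(self):
        self.label = "Custom Cost Distance"
        self.description = "Cost-distance variant built from arcpy primitives."
        self.canRunInBackground = False

    def getParameterInfo(self):
        # Return a list of arcpy.Parameter objects describing the tool's inputs;
        # empty here so the sketch stays runnable outside ArcGIS.
        return []

    def execute(self, parameters, messages):
        # import arcpy  # available inside ArcGIS
        # ... your own cost-distance logic built from arcpy calls goes here ...
        pass
```

ArcGIS instantiates Toolbox, reads its tools list, and shows each tool alongside the built-in ones, which is what lets you package custom logic without touching ESRI's restricted tools.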
Also, check out the environment settings for the existing tool; it might have some options that you are looking for.
I know this is a year late, but I would like to add a couple of ideas to what has been posted, for folks like me who are searching for Python toolbox help.
For educational purposes, begin by creating a model in ModelBuilder. This is one way to use ESRI's proprietary tools in new ways. Decide what you want to do and look at ESRI's presence on GitHub; the developers there have a lot of open-source tools ready to use in ArcMap. Here is one such repository: GeospatialPython
As a side note, contributing to a repository is a great resume builder.
After creating your working model, right-click on it in ArcCatalog and select 'Export as Python script'. Open the script in your favorite IDE and begin cleaning it up!
Now that you have a python script, it is ready to become a python toolbox. Use gDexter42's link and get to work on that.
My team has some interesting uses for python toolboxes and I am currently creating my very first one.
We use runner scripts to debug our tools (with hard-coded parameters).
We use inheritance for functions that we use over and over again (class BaseToolboxMixin(object):) - see this Stack Exchange article on mixins.
Most importantly, we have created our own Python module around the tools.
The .pyt file we made simply imports arcpy and the module we created, executes the tools from a list defined in our 'toolbox_loader.py' file, and has a class that calls the init file that created the module in the first place - just over 20 lines of code.
As our team creates more tools for the module/Python toolbox, we will add them to the list. They will appear inside our toolbox alongside all the ESRI tools. "Seamless integration" was thrown around a lot at the Dev Summit this year.
ESRI is encouraging creativity and open-source usage (check out Esri Leaflet). I wouldn't constrain my thinking just because ESRI's tools are proprietary.
All of this functionality began as a model in ArcMap. Not everyone will need to create their own module - it is complete overkill for most tasks - but it is good to know that the ceiling for Python functionality is high. I am not an experienced developer, but I was able to go from nothing to a functional Python toolbox in about 25 man-hours of work. Someone who knows their stuff could do it in a morning.

Can Gstreamer be used server-side to stream audio to multiple clients on demand?

I'm working on an audio mixing (DAW) web app and am considering Python and Python GStreamer for the backend. I understand that I can contain the audio tracks of a single music project in a gst.Pipeline bin, but playback also appears to be controlled by this Pipeline.
Is it possible to create several "views" into the Pipeline representing the project, so that more than one client can grab an audio stream of this Pipeline at will, with the ability to seek?
If there is a better platform/library out there, I'd appreciate advice on that too. I'd prefer to stick to Python, though, because my team members are already researching Python for other parts of this project.
Thanks very much!
You might want to look at Flumotion (www.flumotion.org). It is a Python-based streaming server using GStreamer, and you might be able to get implementation ideas from it in terms of how to structure your application. It relies heavily on the Twisted Python library for its network handling.
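One GStreamer mechanism worth knowing about for the "several views" requirement is the tee element, which splits one decoded stream into independent branches, each behind its own queue. This is only a sketch of a gst-launch-style pipeline description; the file name, encoder, and port are made up:

```python
# One decoded project stream fanned out to multiple consumers via tee.
# Each branch needs its own queue so a slow client doesn't stall the rest.
# In Python GStreamer this string would be handed to Gst.parse_launch().
pipeline_desc = (
    "filesrc location=project_mix.wav ! wavparse ! audioconvert ! "
    "tee name=split "
    "split. ! queue ! autoaudiosink "                       # local monitor
    "split. ! queue ! lamemp3enc ! shout2send port=8000"    # network client
)
```

Per-client seeking is harder, since a seek on the Pipeline affects every branch; independent time positions generally require a separate pipeline (or at least a separate decode path) per client.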
