How to run a Python script in Android Studio using Chaquopy? - python

I am trying to use a Python script in Android Studio using Chaquopy, but I have 2 problems:
I am not able to import Python's random module.
How do I retrieve a list?
Here is the Python script.
import random

def getPlayers(wk, batsman, bowler, allRounder):
    chosen = batsman[:3] + bowler[:3] + allRounder[:1] + wk[:1]
    remainder = batsman[3:] + bowler[3:] + allRounder[1:] + wk[1:]
    random.shuffle(remainder)
    chosen.extend(remainder[:3])
    players = {'Batsman': [x for x in chosen if x in batsman],
               'Bowler': [x for x in chosen if x in bowler],
               'AllRounder': [x for x in chosen if x in allRounder],
               'Wk': [x for x in chosen if x in wk]}
    for key in players:
        for name in players[key]:
            return f'{key}: {name}'
It is showing "no module named random found".
And the activity file:
private String getTeam() {
    Python python = Python.getInstance();
    PyObject file = python.getModule("getTeam");
    return file.callAttr("getPlayers", wk, batsman, bowler, ar);
}
I am calling the getTeam method in onCreate. So, how can I get the list of keys and values from the Python script?
EDIT
I have used this code to access the data, but it is showing com.chaquo.python.PyException: TypeError: jarray does not support slice syntax.
Here is the code:
    String[] team = getTeam();
    for (String s : team) {
        Log.d("Player", s);
    }
}

private String[] getTeam() {
    Python python = Python.getInstance();
    PyObject file = python.getModule("getTeam");
    return file.callAttr("getPlayers", wk.toArray(), batsman.toArray(), bowler.toArray(), ar.toArray()).toJava(String[].class);
}

It is showing no module named random found.
I assume you're talking about an error shown in the Android Studio editor, rather than one which happened at runtime. As the documentation says, errors in the editor are harmless: just go ahead and run your app, and if there really is an error, the details will be displayed in the Logcat.
How can I get the list of keys and values from python script?
You can write the Python code like this:
return [f'{key}: {name}' for key in players for name in players[key]]
And the Java code like this:
file.callAttr("getPlayers", wk, batsman, bowler, ar).toJava(String[].class)
That will give you a Java String[] array, which you can then process however you want.
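For context, here is a minimal sketch of the whole Python module with that change applied (the selection logic is copied unchanged from the question):
import random

def getPlayers(wk, batsman, bowler, allRounder):
    chosen = batsman[:3] + bowler[:3] + allRounder[:1] + wk[:1]
    remainder = batsman[3:] + bowler[3:] + allRounder[1:] + wk[1:]
    random.shuffle(remainder)
    chosen.extend(remainder[:3])
    players = {'Batsman': [x for x in chosen if x in batsman],
               'Bowler': [x for x in chosen if x in bowler],
               'AllRounder': [x for x in chosen if x in allRounder],
               'Wk': [x for x in chosen if x in wk]}
    # Build one flat list of strings instead of returning inside the loop,
    # so the Java side can convert the result with toJava(String[].class)
    return [f'{key}: {name}' for key in players for name in players[key]]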
EDIT for jarray does not support slice syntax:
This issue is fixed in Chaquopy 9.0.0. With older versions, you can work around it by converting the array to a Python list.
To convert in Python:
def getPlayers(wk, batsman, bowler, allRounder):
    wk = list(wk)
    batsman = list(batsman)
    # etc.
Or to convert in Java:
PyObject list = python.getBuiltins().get("list");
return file.callAttr("getPlayers",
        list.call(wk.toArray()),
        list.call(batsman.toArray()),
        // etc.

Related

C.GoString() only returning first character

I'm trying to call a Go function from Python using a c-shared (.so) file. In my Python code I'm calling the function like this:
website = "https://draftss.com"
domain = "draftss.com"
website_ip = "23.xxx.xxx.xxx"
website_tech_finder_lib = cdll.LoadLibrary("website_tech_finder/builds/websiteTechFinder.so")
result_json_string: str = website_tech_finder_lib.FetchAllData(website, domain, website_ip)
On the Go side I'm converting the strings to Go strings, based on this SO post (out of memory panic while accessing a function from a shared library):
func FetchAllData(w *C.char, d *C.char, dIP *C.char) *C.char {
    var website string = C.GoString(w)
    var domain string = C.GoString(d)
    var domainIP string = C.GoString(dIP)
    fmt.Println(website)
    fmt.Println(domain)
    fmt.Println(domainIP)
    .... // Rest of the code
}
The website, domain and domainIP variables contain just the first character of the strings that I passed:
fmt.Println(website) // -> h
fmt.Println(domain) // -> d
fmt.Println(domainIP) // -> 2
I'm a bit new to Go, so I'm not sure if I'm doing something stupid here. How do I get the full string that I passed?
You need to pass the parameters as UTF-8 encoded bytes. This is most likely because a Python str is handed to ctypes as a wide (wchar_t*) string, so the char* that C.GoString reads hits a zero byte right after the first character; encoding to bytes gives it a plain char*.
website = "https://draftss.com".encode('utf-8')
domain = "draftss.com".encode('utf-8')
website_ip = "23.xxx.xxx.xxx".encode('utf-8')
lib = cdll.LoadLibrary("website_tech_finder/builds/websiteTechFinder.so")
result_json_string: str = website_tech_finder_lib.FetchAllData(website, domain, website_ip)
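Since the Go function returns a *C.char, you will probably also want to tell ctypes about the signature, otherwise the result comes back as a plain int rather than the JSON string. A minimal sketch, assuming the same library path as in the question (untested):
from ctypes import cdll, c_char_p

website_tech_finder_lib = cdll.LoadLibrary("website_tech_finder/builds/websiteTechFinder.so")
# Declare the argument and return types so ctypes converts both directions correctly
website_tech_finder_lib.FetchAllData.argtypes = [c_char_p, c_char_p, c_char_p]
website_tech_finder_lib.FetchAllData.restype = c_char_p  # the Go side returns *C.char

raw = website_tech_finder_lib.FetchAllData(
    "https://draftss.com".encode('utf-8'),
    "draftss.com".encode('utf-8'),
    "23.xxx.xxx.xxx".encode('utf-8'),
)
result_json_string = raw.decode('utf-8')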

Mel to Python difficulty

I have been following tutorials on Digital Tutors for scripting. In some videos the tutor makes a tool that adds gamma correct nodes to any selected shader using MEL. For my own learning I thought I'd rewrite the code in Python, but I am struggling to convert one piece of the MEL code.
The code I have so far is this:
import maya.cmds as cmds

selMat = cmds.ls(sl=True, mat=True)

if len(selMat) < 1:
    cmds.warning('Select at least one Maya or Mental Ray Shader to apply gamma correct node to.')

for mat in selMat:
    gammaCorrect_util = cmds.shadingNode('gammaCorrect', asUtility=True)
    rename_gamma = cmds.rename(gammaCorrect_util, ('gamma_' + mat))
    cmds.setAttr((rename_gamma + '.gammaX'), 0.45)
    cmds.setAttr((rename_gamma + '.gammaY'), 0.45)
    cmds.setAttr((rename_gamma + '.gammaZ'), 0.45)
    if cmds.attributeQuery('color', mat):  # << error here
        connection_to_mat = cmds.listConnections(mat + '.color')
        if len(connection_to_mat) == 1:
            cmds.connectAttr((connection_to_mat + '.outColor'), (rename_gamma + '.value'), f=True)
            cmds.connectAttr((rename_gamma + '.outValue'), (mat + '.color'), f=True)
When I run this I get the following error:
Error: Too many objects or values. Traceback (most recent call last): File "", line 17, in TypeError: Too many objects or values.
The MEL code where I think the issue lies is:
if(`attributeExists "color" $mat`){
    string $connection_to_mat[] = `listConnections($mat + ".color")`;
    if(size($connection_to_mat) == 1){
        connectAttr -f ($connection_to_mat[0] + ".outColor") ($rename_gamma + ".value");
        connectAttr -f ($rename_gamma + ".outValue") ($mat + ".color");
I'm not sure how to use the "attributeQuery" command in Python in place of "attributeExists" in MEL. The tutor also declares the variable "$connection_to_mat[]" up front, but this doesn't work in Python.
attributeQuery only takes one unnamed argument, the attribute. You have to specify the node with the node flag, same as the MEL version.
cmds.attributeQuery('color', n=mat, exists=True)
listConnections returns an array. You'll need to check there are some connections and if so use the first connection: connection_to_mat[0]
Incidentally, if you specify that you want the plug, then you won't have to concatenate the string with ".outColor"
cmds.listConnections(mat + '.color', p=True)
# Result: ['someNode.outColor']
This is better because there's a possibility the incoming attribute is named differently, or is a child of a compound. Example: someNode.colors.outColor1. Whatever it is, you can just feed it to connectAttr.
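Putting both fixes together, the connection part of the loop from the question might look like this (a sketch based on the question's code, not tested in Maya; rename_gamma and mat come from the loop above):
if cmds.attributeQuery('color', n=mat, exists=True):
    # Ask for plugs (p=True) so the result is already "someNode.outColor"
    connections = cmds.listConnections(mat + '.color', p=True)
    # listConnections returns None when there is nothing connected
    if connections and len(connections) == 1:
        cmds.connectAttr(connections[0], rename_gamma + '.value', f=True)
        cmds.connectAttr(rename_gamma + '.outValue', mat + '.color', f=True)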

spark mapPartitionRDD can't print values

I am following the Machine Learning with Spark book and trying to convert the Python code to Scala, using a Beaker notebook to share variables so I can pass values to Python and plot with matplotlib as described in the book. I have been able to convert most of the code so far, but I am having some issues with the try-catch conversion for the data cleansing of the u.item dataset. The code below ends in an infinite loop without a clear indication of what the error is.
val movieData = sc.textFile("/Users/minHenry/workspace/ml-100k/u.item")
val movieDataSplit = movieData.first()
val numMovies = movieData.count()

def convertYear(x: String): Int = x.takeRight(4) match {
  case x => x.takeRight(4).toInt
  case _ => 1900
}

val movieFields = movieData.map(lines => lines.split('|'))
print(movieData.first())
val years1 = movieFields.map(fields => fields(2))
val years = movieFields.map(fields => fields(2).map(x => convertYear(x.toString())))
val filteredYears = years.filter(x => x != 1900)
years.take(2).foreach(println)
I suspect my problem is with my pattern match, but I am not exactly sure what's wrong with it. I think takeRight() works, because it doesn't complain about the type of the value it is applied to.
UPDATE
I have updated the code as follows, per advice from the answer provided thus far:
import scala.util.Try
val movieData = sc.textFile("/Users/minHenry/workspace/ml-100k/u.item")
def convertYear(x:String):Int = Try(x.toInt).getOrElse(1900)
val movieFields = movieData.map(lines => lines.split('|'))
val preYears = movieFields.map(fields => fields(2))
val years = preYears.map(x => x.takeRight(4)) //.map(x => convertYear(x))
println("=======> years")
years.take(2).foreach(println) //-- output = 1995\n1995
println("=======> filteredYears")
val filteredYears = years.filter(x => x!=1900)
filteredYears.take(2).foreach(println)
//val movieAges = filteredYears.map(yr => (1998-yr)).countByValue()
I commented out the map following the takeRight(4) because it's easier to comment it out than to write x => convertYear(x.takeRight(4)), and it should produce the same output. When I apply this convertYear() function I still end up in an infinite loop. The values print as expected in the few print statements shown. The problem is that if I cannot remove the data points that cannot be easily converted to Int, then I am unable to run the countByValue() function in the last line.
Here is the link to my public beaker notebook for more context:
https://pub.beakernotebook.com/#/publications/56eed31d-85ad-4728-a45d-14b3b08d673f
The types in your code are:
movieData: RDD[String]
movieFields: RDD[Array[String]]
years1: RDD[String]
In val years = movieFields.map(fields => fields(2).map(x => convertYear(x.toString()))), fields(2) is a String and so x is a Char, because a String is treated as a Seq[Char]. Every input to convertYear(x: String) is therefore a one-letter string.
Your error is hidden by the type incompatibility workaround (convertYear(x.toString())). That toString() is an alarm bell: always use the type system in Scala and don't hide problems with toString(), isInstanceOf or the like. Then the compiler shows the error before running.
P.S.
The second call of takeRight is useless:
def convertYear(x: String): Int = x.takeRight(4) match {
  case x => x.takeRight(4).toInt
  case _ => 1900
}
Pattern matching is about checking types or conditions (with an if guard). Your first case doesn't check anything, so all inputs go to x.takeRight(4).toInt. There is also no defence against the exception toInt can throw.
Use instead def convertYear(x: String): Int = Try(x.toInt).getOrElse(1900).
Update
scala> import scala.util.Try
import scala.util.Try
scala> def convertYear(x:String):Int = Try(x.toInt).getOrElse(1900)
convertYear: (x: String)Int
scala> List("sdsdf", "1989", "2009", "1945", "asdf", "455")
res0: List[String] = List(sdsdf, 1989, 2009, 1945, asdf, 455)
scala> res0.map(convertYear)
res1: List[Int] = List(1900, 1989, 2009, 1945, 1900, 455)
With an RDD it is all the same, because it is a functor, like List.
val filteredYears = years.filter(x => x != 1900) won't work as you expect: x is a String, not an Int, and Scala doesn't implicitly convert types for comparison, so the predicate is always true. Convert first and then filter, e.g. years.map(convertYear).filter(_ != 1900).

F# library or .Net Numerics equivalent to Python Numpy function

I have the following Python NumPy function; it is able to take X, an array with an arbitrary number of columns and rows, and output a Y value predicted by a least-squares function.
What is the Math.Net equivalent for such a function?
Here is the Python code:
newdataX = np.ones([dataX.shape[0],dataX.shape[1]+1])
newdataX[:,0:dataX.shape[1]]=dataX
# build and save the model
self.model_coefs, residuals, rank, s = np.linalg.lstsq(newdataX, dataY)
I think you are looking for the functions on this page: http://numerics.mathdotnet.com/api/MathNet.Numerics.LinearRegression/MultipleRegression.htm
You have a few options to solve it:
Normal equations: MultipleRegression.NormalEquations(x, y)
QR decomposition: MultipleRegression.QR(x, y)
SVD: MultipleRegression.SVD(x, y)
Normal equations are faster but less numerically stable, while SVD is the most numerically stable but the slowest.
You can call numpy from .NET using pythonnet (C# CODE BELOW IS COPIED FROM GITHUB):
The only "funky" part right now with pythonnet is passing numpy arrays. It is possible to convert them to Python lists at the interface, though this reduces performance for some situations.
https://github.com/pythonnet/pythonnet/tree/develop
static void Main(string[] args)
{
    using (Py.GIL()) {
        dynamic np = Py.Import("numpy");
        dynamic sin = np.sin;
        Console.WriteLine(np.cos(np.pi*2));
        Console.WriteLine(sin(5));
        double c = np.cos(5) + sin(5);
        Console.WriteLine(c);
        dynamic a = np.array(new List<float> { 1, 2, 3 });
        dynamic b = np.array(new List<float> { 6, 5, 4 }, Py.kw("dtype", np.int32));
        Console.WriteLine(a.dtype);
        Console.WriteLine(b.dtype);
        Console.WriteLine(a * b);
        Console.ReadKey();
    }
}
outputs:
1.0
-0.958924274663
-0.6752620892
float64
int32
[ 6. 10. 12.]
Here is an example using F# posted on GitHub:
https://github.com/pythonnet/pythonnet/issues/112
open Python.Runtime
open FSharp.Interop.Dynamic
open System.Collections.Generic

[<EntryPoint>]
let main argv =
    //set up for garbage collection?
    use gil = Py.GIL()
    //-----
    //NUMPY
    //import numpy
    let np = Py.Import("numpy")
    //call a numpy function dynamically
    let sinResult = np?sin(5)
    //make a python list the hard way
    let list = new Python.Runtime.PyList()
    list.Append( new PyFloat(4.0) )
    list.Append( new PyFloat(5.0) )
    //run the python list through np.array dynamically
    let a = np?array( list )
    let sumA = np?sum(a)
    //again, but use a keyword to change the type
    let b = np?array( list, Py.kw("dtype", np?int32 ) )
    let sumAB = np?add(a,b)
    let SeqToPyFloat ( aSeq : float seq ) =
        let list = new Python.Runtime.PyList()
        aSeq |> Seq.iter( fun x -> list.Append( new PyFloat(x)))
        list
    //Worth making some convenience functions (see below for why)
    let a2 = np?array( [|1.0;2.0;3.0|] |> SeqToPyFloat )
    //--------------------
    //Problematic cases: these run but don't give good results
    //make a np.array from a generic list
    let list2 = [|1;2;3|] |> ResizeArray
    let c = np?array( list2 )
    printfn "%A" c //gives type not value in debugger
    //make a np.array from an array
    let d = np?array( [|1;2;3|] )
    printfn "%A" d //gives type not value in debugger
    //use a np.array in a function
    let sumD = np?sum(d) //gives type not value in debugger
    //let sumCD = np?add(d,d) // this will crash
    //can't use primitive f# operators on the np.arrays without throwing an exception; seems
    //to work in c# https://github.com/tonyroberts/pythonnet //develop branch
    //let e = d + 1
    //-----
    //NLTK
    //import nltk
    let nltk = Py.Import("nltk")
    let sentence = "I am happy"
    let tokens = nltk?word_tokenize(sentence)
    let tags = nltk?pos_tag(tokens)
    let taggedWords = nltk?corpus?brown?tagged_words()
    let taggedWordsNews = nltk?corpus?brown?tagged_words(Py.kw("categories", "news") )
    printfn "%A" taggedWordsNews
    let tlp = nltk?sem?logic?LogicParser(Py.kw("type_check",true))
    let parsed = tlp?parse("walk(angus)")
    printfn "%A" parsed?argument
    0 // return an integer exit code

Parsing a C file in python for analyses

From a C file I'd like to parse the switch statements to be able to identify 2 things:
A switch with only 2 cases:
switch(Data)
{
    case 0: value = 10 ; break;
    case 1 : value = 20 ;break;
    default:
        somevar = false;
        value = 0 ;
        ----
        break;
}
==> for instance this would print "section with 2 case"
A switch with many (unlimited) cases:
switch(Data)
{
    case Constant1 : value = 10 ; break;
    case constant2 : value = 20 ;break;
    case constant3 : value = 30 ;break;
    case constant4 : value = 40 ;break;
    default:
        somevar = false;
        value = 0 ;
        ----
        break;
}
==> would print "section with case : Constant1, Constant2, Constant3, Constant4"
To do that, I've done the following :
original_file = open(original_file, "r")
for line in original_file:
    line_nb += 1
    regex_case = re.compile('.*case.*:')
    found_case = regex_case.search(line)
    if found_case:
        # relying on the line number is somewhat unreliable, as the C file may have the break on an additional line
        cases_dict[line_nb] = found_case.group()
bool_or_enum(cases_dict)
which would need bool_or_enum to test for all the required results:
def bool_or_enum(in_dict={}):
    sorted_dict = sorted(in_dict.items(), key=operator.itemgetter(0))
    for index, item in enumerate(sorted_dict):
According to the comments, I've been searching and found that 2 solutions are available:
By using pycparser
pros: it is a Python package, free and open source
cons: not really easy to start with, and it needs additional tools (gcc, llvm, etc.) to preprocess the files; a rough sketch of this route is shown below
By using an external tool: Understand from SciTools
This tool is usable through its GUI to build a complete project to parse, so you can have call graphs, metrics, code checking, etc. For this question I've been using the API, which comes with docs and examples.
pros:
the parsing is entirely done by the tool from the GUI
I have many source files, and re-parsing a complete directory is a "push-button" solution
cons:
not free
not open source
I usually prefer open-source projects, but in this case Understand was the only workable solution. The license is not that expensive and, above all, I chose it because I could parse files that couldn't be compiled, since their dependencies (libs and header files) weren't available.
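For reference, here is a rough, untested sketch of what the pycparser route could look like (the file name is a placeholder, and it assumes the source can be preprocessed and that case labels are plain constants or identifiers):
from pycparser import parse_file, c_ast

class SwitchVisitor(c_ast.NodeVisitor):
    def visit_Switch(self, node):
        # The switch body is normally a Compound whose block_items hold the Case/Default nodes
        items = node.stmt.block_items or []
        cases = [item for item in items if isinstance(item, c_ast.Case)]
        if len(cases) == 2:
            print("section with 2 case")
        else:
            labels = []
            for case in cases:
                if isinstance(case.expr, c_ast.ID):
                    labels.append(case.expr.name)    # e.g. case Constant1:
                elif isinstance(case.expr, c_ast.Constant):
                    labels.append(case.expr.value)   # e.g. case 0:
            print("section with case : " + ", ".join(labels))
        self.generic_visit(node)  # keep looking for nested switches

# parse_file relies on a C preprocessor being available (use_cpp=True)
ast = parse_file("mysourcefile.c", use_cpp=True)
SwitchVisitor().visit(ast)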
Here is the code I've used with Understand:
import understand

understand_file = "C:\\Users\\dlewin\\myproject.udb"

# Create a list with all the cases from the file
def find_cases(file):
    returnList = []
    for lexeme in file.lexer(False, 8, False, True):
        if lexeme.text() == "case":
            returnList.append(lexeme.line_end())  # line nb
            returnList.append(lexeme.text())      # found a case
    return returnList

def find_identifiers(file):
    returnList = []
    # Open the file lexer with macros expanded and inactive code removed
    for lexeme in file.lexer(False, 8, False, True):
        if lexeme.token() == "Identifier":
            returnList.append(lexeme.line_end())  # line nb
            returnList.append(lexeme.text())      # identifier found
    return returnList

db = understand.open(understand_file)  # Open Database
file = db.lookup("mysourcefile.cpp", "file")[0]
print(file.longname())
liste_idents = find_identifiers(file)
liste_cases = find_cases(file)
