How to start a Python program from Java (Runtime.getRuntime().exec())

I am writing a program that requires a Python script to be started before the rest of the Java code runs, but I cannot find a working way to do this. I would appreciate it if someone could suggest a solution to the problem I am facing.
Code (I need help on the part under the comment "start python"):
import java.io.IOException;
import java.util.Scanner;
//makes it easier for user to
//select game/start python
public class gameselect {
public static void main(String args[]) throws IOException {
//start python
try {
String cmd = "python ngramcount.py";
Process process = Runtime.getRuntime().exec(cmd);
process.getInputStream();
}
catch (IOException e) {
e.printStackTrace();
}
//select game
try {
Scanner in = new Scanner (System.in);
game1 g = new game1();
game2 f = new game2();
int choice = 0;
System.out.println("Welcome to TranslateGame!");
System.out.println("Type 1 for game1 (words) or 2 for game2 (phrases)");
while (choice != 1 && choice != 2) {
choice = in.nextInt();
if (choice != 1 && choice != 2) {
System.out.println("No game associated with that number.");
}
}
if (choice == 1) {
g.game1();
}
else if (choice == 2) {
f.game2();
}
}
catch(IOException e) {
System.out.println("No.");
}
}
}

Here is some code that you might be able to get to work. I have commented it and provided some reference links to help you understand what the code is doing. Note that it needs java.io.BufferedReader, java.io.InputStreamReader, and java.io.IOException imported.
public static void main(String[] args) throws IOException {
// I'm using the absolute path for my example.
String fileName = "C:\\Users\\yourname\\Desktop\\testing.py";
// Creates a ProcessBuilder
// doc: https://docs.oracle.com/javase/7/docs/api/java/lang/ProcessBuilder.html
ProcessBuilder pb = new ProcessBuilder("python", fileName);
pb.redirectErrorStream(true); // redirect error stream to a standard output stream
Process process = pb.start(); // Used to start the process
// Reads the output stream of the process.
BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
String line; // this will be used to read the output line by line. Helpful in troubleshooting.
while ((line = reader.readLine()) != null) {
System.out.println(line);
}
}
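If the rest of the Java code must only run once the script has finished, you can also block until the process exits and check its status. Here is a minimal sketch along the same lines (the class name and the relative script path are placeholders):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RunScript {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("python", "ngramcount.py");
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process process = pb.start();
        // Drain the output so the child cannot block on a full pipe buffer.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        int exitCode = process.waitFor(); // blocks until the script exits
        System.out.println("python exited with code " + exitCode);
    }
}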

Related

Unity - extracting camera pixel array is incredibly slow

I'm using the lines of code below to extract the pixel array from a camera every frame, save it as a JPG, and then run a Python process on the JPG. It works, but it is incredibly slow; the bottleneck seems to be reading the pixels in Unity, since the Python process itself only takes 0.02 seconds.
Can anyone suggest relatively easy ways I can speed up this process?
I've seen this, but it's too high-level for me to understand how to adapt it to my use case.
public override void Initialize()
{
renderTexture = new RenderTexture(84, 84, 24);
rawByteData = new byte[84 * 84 * bytesPerPixel];
texture2D = new Texture2D(84, 84, TextureFormat.RGB24, false);
rect = new Rect(0, 0, 84, 84);
cam.targetTexture = renderTexture;
}
private List<float> run_cmd()
{
// Setup a camera, texture and render texture
cam.targetTexture = renderTexture;
cam.Render();
// Read pixels to texture
RenderTexture.active = renderTexture;
texture2D.ReadPixels(rect, 0, 0);
rawByteData = ImageConversion.EncodeToJPG(texture2D);
// Assign random temporary filename to jpg
string fileName = "/media/home/tmp/" + Guid.NewGuid().ToString() + ".jpg";
File.WriteAllBytes(fileName, rawByteData); // Requires System.IO
// Start Python process
ProcessStartInfo start = new ProcessStartInfo();
start.FileName = "/media/home/path/to/python/exe";
start.Arguments = string.Format(
"/media/home/path/to/pythonfile.py photo {0}", fileName);
start.UseShellExecute = false;
start.RedirectStandardOutput = true;
start.RedirectStandardError = true;
string stdout;
using(Process process = Process.Start(start))
{
using(StreamReader reader = process.StandardOutput)
{
stdout = reader.ReadToEnd();
}
}
string[] tokens = stdout.Split(',');
List<float> result = tokens.Select(x => float.Parse(x)).ToList();
System.IO.File.Delete(fileName);
return result;
}
Unity 2018.1 added a new system for asynchronously reading data back from the GPU (https://docs.unity3d.com/ScriptReference/Rendering.AsyncGPUReadback.html), and with it you don't need a camera to get the pixel array. Unlike ReadPixels, which stalls the main thread until the GPU finishes rendering, the readback completes in the background. I wrote a simple example which works well and is pretty fast:
using System.Collections;
using System.IO;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.Experimental.Rendering;
using UnityEngine.Rendering;
public class CameraTextureTest : MonoBehaviour
{
private GraphicsFormat format;
private int frameIndex = 0;
IEnumerator Start()
{
yield return new WaitForSeconds(1);
while (true)
{
yield return new WaitForSeconds(0.032f);
yield return new WaitForEndOfFrame();
var rt = RenderTexture.GetTemporary(Screen.width, Screen.height, 32);
format = rt.graphicsFormat;
ScreenCapture.CaptureScreenshotIntoRenderTexture(rt);
AsyncGPUReadback.Request(rt, 0, TextureFormat.RGBA32, OnCompleteReadback);
RenderTexture.ReleaseTemporary(rt);
}
}
void OnCompleteReadback(AsyncGPUReadbackRequest request)
{
if (request.hasError)
{
Debug.Log("GPU readback error detected.");
return;
}
byte[] array = request.GetData<byte>().ToArray();
Task.Run(() =>
{
File.WriteAllBytes($"D:/Screenshots/Screenshot{frameIndex}.png",ImageConversion.EncodeArrayToPNG(array, format, (uint) Screen.width, (uint) Screen.height));
});
frameIndex++;
}
}
You can adapt this example to your task.

Use a TensorFlow model in Android

I have a TensorFlow model that I have converted to ".tflite", but I don't know how to implement it on Android. I followed the TensorFlow guidelines, but since no XML code is given on the TensorFlow website I am struggling to connect the model with the front end (XML). I need a clear explanation of how to use my model in Android Studio using Java.
I followed the official instructions given on the TensorFlow website to implement the model on Android.
Here is sample code showing how to implement object detection based on a tflite model from TensorFlow. I suppose these kinds of answers are not the best answers, but I happen to have a simple example of your exact problem.
Note: it does detect objects and outputs their labels to standard output using Log.d. No boxes or labels will be drawn around detected objects.
Download the starter model and labels from here. Put them into the assets folder of your project.
Java
import android.content.pm.PackageManager;
import android.media.Image;
import android.os.Bundle;
import android.util.Log;
import android.widget.Toast;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import androidx.camera.core.Camera;
import androidx.camera.core.CameraSelector;
import androidx.camera.core.ExperimentalGetImage;
import androidx.camera.core.ImageAnalysis;
import androidx.camera.core.ImageProxy;
import androidx.camera.core.Preview;
import androidx.camera.lifecycle.ProcessCameraProvider;
import androidx.camera.view.PreviewView;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.mlkit.common.model.LocalModel;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.objects.DetectedObject;
import com.google.mlkit.vision.objects.ObjectDetection;
import com.google.mlkit.vision.objects.ObjectDetector;
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.concurrent.ExecutionException;
public class ActivityExample extends AppCompatActivity {
private ListenableFuture<ProcessCameraProvider> cameraProviderFuture;
private ObjectDetector objectDetector;
private PreviewView prevView;
private List<String> labels;
private int REQUEST_CODE_PERMISSIONS = 101;
private String[] REQUIRED_PERMISSIONS =
new String[]{"android.permission.CAMERA"};
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_fullscreen);
prevView = findViewById(R.id.viewFinder);
prepareObjectDetector();
prepareLabels();
if (allPermissionsGranted()) {
startCamera();
} else {
ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS);
}
}
private void prepareLabels() {
try {
InputStreamReader reader = new InputStreamReader(getAssets().open("labels_mobilenet_quant_v1_224.txt"));
labels = readLines(reader);
} catch (IOException e) {
e.printStackTrace();
}
}
private List<String> readLines(InputStreamReader reader) {
BufferedReader bufferedReader = new BufferedReader(reader, 8 * 1024);
Iterator<String> iterator = new LinesSequence(bufferedReader);
ArrayList<String> list = new ArrayList<>();
while (iterator.hasNext()) {
list.add(iterator.next());
}
return list;
}
private void prepareObjectDetector() {
CustomObjectDetectorOptions options = new CustomObjectDetectorOptions.Builder(loadModel("mobilenet_v1_1.0_224_quant.tflite"))
.setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
.enableMultipleObjects()
.enableClassification()
.setClassificationConfidenceThreshold(0.5f)
.setMaxPerObjectLabelCount(3)
.build();
objectDetector = ObjectDetection.getClient(options);
}
private LocalModel loadModel(String assetFileName) {
return new LocalModel.Builder()
.setAssetFilePath(assetFileName)
.build();
}
private void startCamera() {
cameraProviderFuture = ProcessCameraProvider.getInstance(this);
cameraProviderFuture.addListener(() -> {
try {
ProcessCameraProvider cameraProvider = cameraProviderFuture.get();
bindPreview(cameraProvider);
} catch (ExecutionException e) {
// No errors need to be handled for this Future.
// This should never be reached.
} catch (InterruptedException e) {
}
}, ContextCompat.getMainExecutor(this));
}
private void bindPreview(ProcessCameraProvider cameraProvider) {
Preview preview = new Preview.Builder().build();
CameraSelector cameraSelector = new CameraSelector.Builder()
.requireLensFacing(CameraSelector.LENS_FACING_BACK)
.build();
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.build();
YourAnalyzer yourAnalyzer = new YourAnalyzer();
yourAnalyzer.setObjectDetector(objectDetector, labels);
imageAnalysis.setAnalyzer(
ContextCompat.getMainExecutor(this),
yourAnalyzer);
Camera camera =
cameraProvider.bindToLifecycle(
this,
cameraSelector,
preview,
imageAnalysis
);
preview.setSurfaceProvider(prevView.createSurfaceProvider(camera.getCameraInfo()));
}
private Boolean allPermissionsGranted() {
for (String permission : REQUIRED_PERMISSIONS) {
if (ContextCompat.checkSelfPermission(
this,
permission
) != PackageManager.PERMISSION_GRANTED
) {
return false;
}
}
return true;
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
if (requestCode == REQUEST_CODE_PERMISSIONS) {
if (allPermissionsGranted()) {
startCamera();
} else {
Toast.makeText(this, "Permissions not granted by the user.", Toast.LENGTH_SHORT)
.show();
finish();
}
}
}
private static class YourAnalyzer implements ImageAnalysis.Analyzer {
private ObjectDetector objectDetector;
private List<String> labels;
public void setObjectDetector(ObjectDetector objectDetector, List<String> labels) {
this.objectDetector = objectDetector;
this.labels = labels;
}
@Override
@ExperimentalGetImage
public void analyze(@NonNull ImageProxy imageProxy) {
Image mediaImage = imageProxy.getImage();
if (mediaImage != null) {
InputImage image = InputImage.fromMediaImage(
mediaImage,
imageProxy.getImageInfo().getRotationDegrees()
);
objectDetector
.process(image)
.addOnFailureListener(e -> imageProxy.close())
.addOnSuccessListener(detectedObjects -> {
// list of detectedObjects has all the information you need
StringBuilder builder = new StringBuilder();
for (DetectedObject detectedObject : detectedObjects) {
for (DetectedObject.Label label : detectedObject.getLabels()) {
builder.append(labels.get(label.getIndex()));
builder.append("\n");
}
}
Log.d("OBJECTS DETECTED", builder.toString().trim());
imageProxy.close();
});
}
}
}
static class LinesSequence implements Iterator<String> {
private BufferedReader reader;
private String nextValue;
private Boolean done = false;
public LinesSequence(BufferedReader reader) {
this.reader = reader;
}
@Override
public boolean hasNext() {
if (nextValue == null && !done) {
try {
nextValue = reader.readLine();
} catch (IOException e) {
e.printStackTrace();
nextValue = null;
}
if (nextValue == null) done = true;
}
return nextValue != null;
}
@Override
public String next() {
if (!hasNext()) {
throw new NoSuchElementException();
}
String answer = nextValue;
nextValue = null;
return answer;
}
}
}
XML layout
<?xml version="1.0" encoding="utf-8"?>
<androidx.camera.view.PreviewView
xmlns:android="http://schemas.android.com/apk/res/android"
android:id="#+id/viewFinder"
android:layout_width="match_parent"
android:layout_height="match_parent" />
Gradle file configuration
android {
...
aaptOptions {
noCompress "tflite" // Your model\'s file extension: "tflite", "lite", etc.
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
}
dependencies {
...
implementation 'com.google.mlkit:object-detection-custom:16.0.0'
def camerax_version = "1.0.0-beta03"
// CameraX core library using camera2 implementation
implementation "androidx.camera:camera-camera2:$camerax_version"
// CameraX Lifecycle Library
implementation "androidx.camera:camera-lifecycle:$camerax_version"
// CameraX View class
implementation "androidx.camera:camera-view:1.0.0-alpha10"
}
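One caveat: preview.setSurfaceProvider(prevView.createSurfaceProvider(camera.getCameraInfo())) matches the camera-view 1.0.0-alpha10 artifact pinned above. Later CameraX releases reworked this API (eventually into prevView.getSurfaceProvider() with no arguments), so that line is the one to adjust if you bump the CameraX versions.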

For the Hazelcast Python client, how do I do set intersection between multiple Hazelcast set entities server-side, without retain_all()?

I have multiple Hazelcast sets whose intersection I want to find, but I want to avoid pulling any data to the client side. My current approach, shown below, does exactly what I want to avoid: it intersects the first set with each of the remaining sets in turn, so that set1 ends up as the intersection of them all, but it pulls every other set's contents to the client along the way.
for i in range(1, len(sets)):
cur = sets[i]
set1.retain_all(cur.get_all())
Hazelcast's retain_all doesn't work with two set entities, only with a set and a plain collection, which is not what I am looking for. For comparison, this can be done in Redis with the code below, so I want its Hazelcast equivalent.
set_result = "set_result"
redisClient.sinterstore(set_result, *list(sets))
Any help would be appreciated!
Since Hazelcast's ISet is a Set, which in turn is a Collection, the following code should work:
set1.retainAll(cur);
But it doesn't seem like you'd like set1 to be modified; rather, you'd like to store the result in a different set, much like Redis' sinterstore function.
The following is an example of an alternative implementation:
public class RetainAllExample {
public static void main(String[] args) {
HazelcastInstance h1 = Hazelcast.newHazelcastInstance();
HazelcastInstance h2 = Hazelcast.newHazelcastInstance();
Set<String> set1 = h1.getSet("set1");
Set<String> set2 = h1.getSet("set2");
set1.add("a");
set1.add("b");
set1.add("c");
set1.add("d");
set2.add("c");
set2.add("d");
set2.add("e");
String resultName = "result";
String[] setNames = new String[] { "set1", "set2"};
RetainAll retainAll = new RetainAll(resultName, setNames);
IExecutorService exec = h1.getExecutorService("HZ-Executor-1");
Future<Boolean> task = exec.submit(retainAll);
try {
if(task.get(1_000, TimeUnit.MILLISECONDS)) {
Set<String> result = h1.getSet(resultName);
result.forEach(str -> System.out.println(str + ", "));
}
} catch (Exception e) {
e.printStackTrace();
System.exit(-1);
}
System.exit(0);
}
static class RetainAll implements Callable<Boolean>, HazelcastInstanceAware, Serializable {
private HazelcastInstance hazelcastInstance;
private String resultSetName;
private String[] setNames;
public RetainAll(String resultSetName, String[] setNames) {
this.resultSetName = resultSetName;
this.setNames = setNames;
}
@Override
public Boolean call() {
try {
Set[] sets = new Set[setNames.length];
IntStream.range(0, setNames.length).forEach(i -> sets[i] = hazelcastInstance.getSet(setNames[i]));
ISet resultSet = hazelcastInstance.getSet(resultSetName);
resultSet.addAll(sets[0]);
IntStream.range(1, sets.length).forEach(i -> resultSet.retainAll(sets[i]));
}
catch (Exception e) {
e.printStackTrace();
return false;
}
return true;
}
@Override
public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
this.hazelcastInstance = hazelcastInstance;
}
}
}
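The example above runs two embedded members in a single JVM. When connecting from outside the cluster, the same callable can be submitted through a Hazelcast client instead, so no set data ever reaches the client. A minimal sketch, assuming Hazelcast 3.x package names and that the RetainAllExample.RetainAll class above is visible (same package) and also deployed on the members' classpath, since the callable is serialized and executed on a member:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;
import java.util.concurrent.Future;

public class RetainAllClient {
    public static void main(String[] args) throws Exception {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IExecutorService exec = client.getExecutorService("HZ-Executor-1");
        // Runs on a member; only the Boolean result crosses the wire.
        Future<Boolean> task = exec.submit(
                new RetainAllExample.RetainAll("result", new String[]{"set1", "set2"}));
        if (task.get()) {
            client.getSet("result").forEach(str -> System.out.println(str + ", "));
        }
        client.shutdown();
    }
}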

Translate the code from Python to C++

Now that I understand how the code works, I would like to translate it to C++.
The original Python code:
def recv_all_until(s, crlf):
data = ""
while data[-len(crlf):] != crlf:
data += s.recv(1)
return data
Here's what I tried:
std::string recv_all_until(int socket, std::string crlf)
{
std::string data = "";
char buffer[1];
memset(buffer, 0, 1);
while(data.substr(data.length()-2, data.length()) != crlf)
{
if ((recv(socket, buffer, 1, 0)) == 0)
{
if (errno != 0)
{
close(socket);
perror("recv");
exit(1);
}
}
data = data + std::string(buffer);
memset(buffer, 0, 1);
}
return data;
}
But it shows:
terminate called after throwing an instance of 'std::out_of_range'
what(): basic_string::substr
I understand that the problem is inside the while loop, since at first the data string is empty. So how can I improve this to make it work the same way it does in Python? Thank you.
You have a problem in the first iteration of your while loop: data is an empty string, so data.length() is 0, and data.length() - 2 underflows because length() returns an unsigned type. You therefore call substr with a start position far beyond the end of the string, which throws std::out_of_range.
To fix this, you need to check the string's length before taking the substring.
Also, there's a way of finding such mistakes that is faster than writing a Stack Overflow question about them: consider reading this article.
If we first change your Python code a bit:
def recv_all_until(s, crlf):
data = ""
while not data.endswith(crlf):
data += s.recv(1)
return data
What we need to do in C++ becomes much clearer:
#include <algorithm>    // std::equal
#include <cstdio>       // perror
#include <cstdlib>      // exit
#include <string>
#include <sys/socket.h> // recv
#include <unistd.h>     // close

bool ends_with(const std::string& str, const std::string& suffix)
{
    return str.size() >= suffix.size() &&
           std::equal(suffix.rbegin(), suffix.rend(), str.rbegin());
}

std::string recv_all_until(int socket, const std::string& crlf)
{
    std::string data;
    char buffer[1];
    while (!ends_with(data, crlf))
    {
        // recv returns -1 on error and 0 when the peer closes the
        // connection; treat both as fatal, as the original code intended.
        if (recv(socket, buffer, 1, 0) <= 0)
        {
            close(socket);
            perror("recv");
            exit(1);
        }
        // Append exactly one byte. std::string(buffer) would read past
        // the one-byte array looking for a NUL terminator.
        data.append(buffer, 1);
    }
    return data;
}
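As a side note, if you can compile as C++20, std::string has a built-in ends_with member, so the helper above becomes unnecessary and the loop condition can simply be while (!data.ends_with(crlf)).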

How to detect and convert progressive JPEGs with Python

I'd like to be able to detect progressive JPEGs using Python and convert them to non-progressive.
(I'm writing a tool to manage images for Android, and progressive JPEGs seem to break it.)
I apologise in advance for providing a PHP-based answer, whereas the question was asked about Python. Nevertheless, I think it adds value and can be useful.
Before attempting to convert a progressive image to non-progressive, it is good to have a detection method for progressive JPEG.
Here is the PHP function that does it. It could easily be rewritten in other languages (Python would be a candidate), as it reads binary data and JPEG markers and therefore does not rely on any language-specific library.
public function checkProgressiveJPEG($filepath) {
$result = false;
// $this->log = 'started analysis...';
// http://en.wikipedia.org/wiki/Jpeg
// for more details on JPEG structure
// SOI [0xFF, 0xD8] = Start Of Image
// SOF0 [0xFF, 0xC0] = Start Of Frame (Baseline DCT)
// SOF2 [0xFF, 0xC2] = Start Of Frame (Progressive DCT)
// SOS [0xFF, 0xDA] = Start Of Scan
if(file_exists($filepath)) {
$fs = @fopen($filepath, "rb");
$bytecount = 0;
$byte_last = 0;
$buffer = 0;
$buffer_length = 4*1024;
$begins_with_SOI = false;
while($buffer = fread($fs, $buffer_length)) {
// always carry over previous ending byte
// just in case the buffer is read after a 0xFF marker
if($byte_last) {
$buffer = $byte_last.$buffer;
}
$byte_last = 0;
preg_match("/\.$/", $buffer, $matches);
if(count($matches)) {
$byte_last = $matches[0];
}
// check if it begins with SOI marker
if(!$begins_with_SOI) {
preg_match("/^\\xff\\xd8/", $buffer, $matches);
if(count($matches)) {
$begins_with_SOI = true;
} else {
// $this->log = 'does not start with SOI marker';
$result = false;
break;
}
}
// check if SOS or SOF2 is reached
preg_match("/\\xff(\\xda|\\xc2)/", $buffer, $matches);
if(count($matches)) {
if(bin2hex($matches[0]) == 'ffda') {
// $this->log = 'SOS is reached and SOF2 has not been detected, so image is not progressive.';
$result = false;
break;
} else if(bin2hex($matches[0]) == 'ffc2') {
// $this->log = 'SOF2 is reached, so image is progressive.';
$result = true;
break;
}
}
} // end while
fclose($fs);
} // end if
return $result;
}
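For the Python side of the question, the same scan ports over directly: a JPEG is progressive exactly when the SOF2 marker (0xFF 0xC2) appears before the SOS marker (0xFF 0xDA), so reading the file in binary mode and comparing the positions of those two byte pairs is enough for detection. For the conversion step, re-saving the image with an imaging library such as PIL/Pillow produces a baseline JPEG by default (progressive output has to be requested explicitly via progressive=True in save()).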
