How to execute a python console using Qt on win10?

I'm trying to execute a python console using QProcess and show the content of the console in a QTextEdit.
Below is the relevant part of my code:
connect(&process, SIGNAL(readyReadStandardOutput()), this, SLOT(PrintStandardOutput()));
connect(&process, SIGNAL(started()), this, SLOT(Started()));
...
void Widget::PrintStandardOutput()
{
QString stdOutput = QString::fromLocal8Bit(process.readAllStandardOutput());
ui->textEdit_Output->append(stdOutput);
}
void Widget::Started()
{
ui->stateLabel->setText("Process started...");
}
I used QProcess::start("python") to start a new process and QProcess::write() to write my Python code. But nothing shows up in my QTextEdit, even though the Python process is indeed started, since the stateLabel shows "Process started...". How can I get the Python output to appear in the QTextEdit? Thanks.

Try:
connect(&process, SIGNAL(readyReadStandardOutput()), this, SLOT(PrintStandardOutput()));
connect(&process, SIGNAL(started()), this, SLOT(Started()));
process.setProcessChannelMode(QProcess::MergedChannels);
process.start(processToStart, arguments);
void Widget::PrintStandardOutput()
{
// Get the output
if (process.waitForStarted(-1)) {
while (process.waitForReadyRead(-1)) {
// Append only the newly arrived chunk; appending an accumulated
// string here would duplicate earlier output in the QTextEdit.
ui->textEdit_Output->append(QString::fromLocal8Bit(process.readAll()));
}
}
process.waitForFinished();
}
void Widget::Started()
{
ui->stateLabel->setText("Process started...");
}
setProcessChannelMode(QProcess::MergedChannels) merges the output channels. Different programs write to different outputs: some use the error channel for their normal logging, some use the "standard" output, some both. It is better to merge them here.
readAll() reads everything available so far.
It is put in a loop with waitForReadyRead(-1) (-1 means no timeout), which will block until something is available to read. This is to make sure that everything is actually read.
Simply calling readAll() after the process is finished has turned out to be highly unreliable (buffer might already be empty).
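One more python-specific pitfall worth mentioning here (an assumption about the asker's setup, since the advice above is generic): when its standard input is a pipe rather than a console, python treats stdin as a script and waits for end-of-file before executing anything, and its output is block-buffered, so QProcess may never receive a byte even though the process is running. Starting the interpreter with -i (force interactive mode, so each statement is executed as it arrives) and -u (unbuffered output) avoids both problems. A minimal sketch:
process.setProcessChannelMode(QProcess::MergedChannels);
// "-i" forces the interactive interpreter even when stdin is not a terminal,
// "-u" disables stdout/stderr buffering so results arrive immediately.
process.start("python", QStringList() << "-i" << "-u");
process.write("print(1 + 1)\n"); // statements must end with a newline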

PermissionError for ffmpeg subprocess [duplicate]

I have some code, and when it executes it throws an IOException, saying that
The process cannot access the file 'filename' because it is being used by
another process
What does this mean, and what can I do about it?
What is the cause?
The error message is pretty clear: you're trying to access a file, and it's not accessible because another process (or even the same process) is doing something with it (and it didn't allow any sharing).
Debugging
It may be pretty easy to solve (or pretty hard to understand), depending on your specific scenario. Let's look at some.
Your process is the only one to access that file
You're sure the other process is your own process. If you know you open that file in another part of your program, then first of all you have to check that you properly close the file handle after each use. Here is an example of code with this bug:
var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
var reader = new StreamReader(stream);
// Read data from this file, when I'm done I don't need it any more
File.Delete(path); // IOException: file is in use
Fortunately FileStream implements IDisposable, so it's easy to wrap all your code inside a using statement:
using (var stream = File.Open("myfile.txt", FileMode.Open)) {
// Use stream
}
// Here the stream is not accessible and has been closed (even if
// an exception was thrown and the stack unwound)
This pattern will also ensure that the file won't be left open in case of exceptions (it may be the reason the file is in use: something went wrong, and no one closed it; see this post for an example).
If everything seems fine (you're sure you always close every file you open, even in case of exceptions) and you have multiple working threads, then you have two options: rework your code to serialize file access (not always doable, and not always wanted) or apply a retry pattern. This is a pretty common pattern for I/O operations: you try to do something and in case of error you wait and try again (did you ever ask yourself why, for example, the Windows shell takes some time to inform you that a file is in use and cannot be deleted?). In C# it's pretty easy to implement (see also better examples about disk I/O, networking and database access).
private const int NumberOfRetries = 3;
private const int DelayOnRetry = 1000;
for (int i=1; i <= NumberOfRetries; ++i) {
try {
// Do stuff with file
break; // When done we can break loop
}
catch (IOException) when (i < NumberOfRetries) {
// Let the last failure propagate. You may also check the error
// code to filter some exceptions: not every error can be recovered.
Thread.Sleep(DelayOnRetry);
}
}
Please note a common error we see very often on StackOverflow:
var stream = File.Open(path, FileMode.Open, FileAccess.Read);
var content = File.ReadAllText(path);
In this case ReadAllText() will fail because the file is in use (by the File.Open() on the line before). Opening the file beforehand is not only unnecessary but also wrong. The same applies to all File functions that don't return a handle to the file you're working with: File.ReadAllText(), File.WriteAllText(), File.ReadAllLines(), File.WriteAllLines() and others (like the File.AppendAllXyz() functions) all open and close the file by themselves.
Your process is not the only one to access that file
If your process is not the only one to access that file, then interaction can be harder. A retry pattern will help (if the file shouldn't be open by anyone else but it is, then you need a utility like Process Explorer to check who is doing what).
Ways to avoid
When applicable, always use using statements to open files. As said in the previous paragraph, it will actively help you avoid many common errors (see this post for an example of how not to use it).
If possible, try to decide who owns access to a specific file and centralize access through a few well-known methods. If, for example, you have a data file where your program reads and writes, then you should box all I/O code inside a single class. It'll make debug easier (because you can always put a breakpoint there and see who is doing what) and also it'll be a synchronization point (if required) for multiple access.
Don't forget I/O operations can always fail, a common example is this:
if (File.Exists(path))
File.Delete(path);
If someone deletes the file after File.Exists() but before File.Delete(), then it'll throw an IOException in a place where you may wrongly feel safe.
Whenever it's possible, apply a retry pattern, and if you're using FileSystemWatcher, consider postponing action (because you'll get notified, but an application may still be working exclusively with that file).
Advanced scenarios
It's not always so easy, so you may need to share access with someone else. If, for example, you're reading from the beginning and writing to the end, you have at least two options.
1) Share the same FileStream with proper synchronization functions (because it is not thread-safe). See this post and this one for examples.
2) Use the FileShare enumeration to instruct the OS to allow other processes (or other parts of your own process) to access the same file concurrently.
using (var stream = File.Open(path, FileMode.Open, FileAccess.Write, FileShare.Read))
{
}
In this example I showed how to open a file for writing and share it for reading; note that when reading and writing overlap, the result is undefined or invalid data, and this is a situation that must be handled when reading. Also note that this doesn't make access to the stream thread-safe, so this object can't be shared across multiple threads unless access is synchronized somehow (see the previous links). Other sharing options are available, and they open up more complex scenarios; please refer to MSDN for more details.
In general, N processes can read from the same file together, but only one should write. In a controlled scenario you may even enable concurrent writes, but that can't be generalized in a few paragraphs inside this answer.
Is it possible to unlock a file used by another process? It's not always safe and not so easy, but yes, it's possible.
Using FileShare fixed my issue of opening a file even while it is opened by another process.
using (var stream = File.Open(path, FileMode.Open, FileAccess.Write, FileShare.ReadWrite))
{
}
Problem
One is trying to open a file with System.IO.File.Open(path, FileMode) and wants shared access to it, but
if you read the documentation of System.IO.File.Open(path, FileMode), it explicitly says that it does not allow sharing.
Solution
You have to use the other overload, which takes a FileShare:
using FileStream fs = System.IO.File.Open(filePath, FileMode.Open, FileAccess.Read, FileShare.Read);
with FileShare.Read
Had an issue while uploading an image and couldn't delete it afterwards; found a solution.
//C# .NET
var image = Image.FromFile(filePath);
image.Dispose(); // this releases the file handle held by the Image
//later...
File.Delete(filePath); //now works
As other answers in this thread have pointed out, to resolve this error you need to carefully inspect the code, to understand where the file is getting locked.
In my case, I was sending out the file as an email attachment before performing the move operation.
So the file stayed locked for a couple of seconds until the SMTP client finished sending the email.
The solution I adopted was to move the file first, and then send the email. This solved the problem for me.
Another possible solution, as pointed out earlier by Hudson, would've been to dispose the object after use.
public static void SendEmail()
{
MailMessage mMailMessage = new MailMessage();
//setup other email stuff
if (File.Exists(attachmentPath))
{
Attachment attachment = new Attachment(attachmentPath);
mMailMessage.Attachments.Add(attachment);
attachment.Dispose(); //disposing the Attachment object
}
}
I got this error because I was doing a File.Move to a file path without a file name; you need to specify the full path, including the file name, in the destination.
The error indicates another process is trying to access the file. Maybe you or someone else has it open while you are attempting to write to it. "Read" or "Copy" usually doesn't cause this, but writing to it or calling delete on it would.
There are some basic things to avoid this, as other answers have mentioned:
Wrap FileStream operations in a using block, and open the file with FileShare.ReadWrite access.
For example:
using (FileStream stream = File.Open(path, FileMode.Open, FileAccess.Write, FileShare.ReadWrite))
{
}
Note that FileAccess.ReadWrite is not possible if you use FileMode.Append.
I ran across this issue when using an input stream to do a File.SaveAs while the file was in use. In my case I found I didn't actually need to save it back to the file system at all, so I ended up just removing that, but I probably could've tried creating a FileStream in a using statement with FileAccess.ReadWrite, much like the code above.
Saving your data to a different file, deleting the old one once it is no longer in use, and then renaming the successfully saved file to the original name is another option. How you test whether the file is in use is done through the
List<Process> lstProcs = ProcessHandler.WhoIsLocking(file);
line in my code below. This could run in a Windows service on a loop if you have a particular file you want to watch and delete regularly whenever you want to replace it. If it isn't always the same file, a text file or database table could be updated with the file names the service should check; it then performs the check for locking processes and subsequently kills those processes and deletes the file, as I describe in the next option. Note that you'll need an account user name and password with Admin privileges on the given computer, of course, to perform the deletion and the ending of processes.
When you don't know if a file will be in use when you are trying to save it, you can close all processes that could be using it, like Word, if it's a Word document, ahead of the save.
If it is local, you can do this:
ProcessHandler.localProcessKill("winword.exe");
If it is remote, you can do this:
ProcessHandler.remoteProcessKill(computerName, txtUserName, txtPassword, "winword.exe");
where txtUserName is in the form of DOMAIN\user.
Let's say you don't know the process name that is locking the file. Then, you can do this:
List<Process> lstProcs = new List<Process>();
lstProcs = ProcessHandler.WhoIsLocking(file);
foreach (Process p in lstProcs)
{
if (p.MachineName == ".")
ProcessHandler.localProcessKill(p.ProcessName);
else
ProcessHandler.remoteProcessKill(p.MachineName, txtUserName, txtPassword, p.ProcessName);
}
Note that file must be the UNC path (\\computer\share\yourdoc.docx) in order for the Process to figure out what computer it's on and for p.MachineName to be valid.
Below is the class these functions use, which requires adding a reference to System.Management. The code was originally written by Eric J.:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Runtime.InteropServices;
using System.Diagnostics;
using System.Management;
namespace MyProject
{
public static class ProcessHandler
{
[StructLayout(LayoutKind.Sequential)]
struct RM_UNIQUE_PROCESS
{
public int dwProcessId;
public System.Runtime.InteropServices.ComTypes.FILETIME ProcessStartTime;
}
const int RmRebootReasonNone = 0;
const int CCH_RM_MAX_APP_NAME = 255;
const int CCH_RM_MAX_SVC_NAME = 63;
enum RM_APP_TYPE
{
RmUnknownApp = 0,
RmMainWindow = 1,
RmOtherWindow = 2,
RmService = 3,
RmExplorer = 4,
RmConsole = 5,
RmCritical = 1000
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct RM_PROCESS_INFO
{
public RM_UNIQUE_PROCESS Process;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = CCH_RM_MAX_APP_NAME + 1)]
public string strAppName;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = CCH_RM_MAX_SVC_NAME + 1)]
public string strServiceShortName;
public RM_APP_TYPE ApplicationType;
public uint AppStatus;
public uint TSSessionId;
[MarshalAs(UnmanagedType.Bool)]
public bool bRestartable;
}
[DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
static extern int RmRegisterResources(uint pSessionHandle,
UInt32 nFiles,
string[] rgsFilenames,
UInt32 nApplications,
[In] RM_UNIQUE_PROCESS[] rgApplications,
UInt32 nServices,
string[] rgsServiceNames);
[DllImport("rstrtmgr.dll", CharSet = CharSet.Auto)]
static extern int RmStartSession(out uint pSessionHandle, int dwSessionFlags, string strSessionKey);
[DllImport("rstrtmgr.dll")]
static extern int RmEndSession(uint pSessionHandle);
[DllImport("rstrtmgr.dll")]
static extern int RmGetList(uint dwSessionHandle,
out uint pnProcInfoNeeded,
ref uint pnProcInfo,
[In, Out] RM_PROCESS_INFO[] rgAffectedApps,
ref uint lpdwRebootReasons);
/// <summary>
/// Find out what process(es) have a lock on the specified file.
/// </summary>
/// <param name="path">Path of the file.</param>
/// <returns>Processes locking the file</returns>
/// <remarks>See also:
/// http://msdn.microsoft.com/en-us/library/windows/desktop/aa373661(v=vs.85).aspx
/// http://wyupdate.googlecode.com/svn-history/r401/trunk/frmFilesInUse.cs (no copyright in code at time of viewing)
///
/// </remarks>
static public List<Process> WhoIsLocking(string path)
{
uint handle;
string key = Guid.NewGuid().ToString();
List<Process> processes = new List<Process>();
int res = RmStartSession(out handle, 0, key);
if (res != 0) throw new Exception("Could not begin restart session. Unable to determine file locker.");
try
{
const int ERROR_MORE_DATA = 234;
uint pnProcInfoNeeded = 0,
pnProcInfo = 0,
lpdwRebootReasons = RmRebootReasonNone;
string[] resources = new string[] { path }; // Just checking on one resource.
res = RmRegisterResources(handle, (uint)resources.Length, resources, 0, null, 0, null);
if (res != 0) throw new Exception("Could not register resource.");
//Note: there's a race condition here -- the first call to RmGetList() returns
// the total number of process. However, when we call RmGetList() again to get
// the actual processes this number may have increased.
res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, null, ref lpdwRebootReasons);
if (res == ERROR_MORE_DATA)
{
// Create an array to store the process results
RM_PROCESS_INFO[] processInfo = new RM_PROCESS_INFO[pnProcInfoNeeded];
pnProcInfo = pnProcInfoNeeded;
// Get the list
res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, processInfo, ref lpdwRebootReasons);
if (res == 0)
{
processes = new List<Process>((int)pnProcInfo);
// Enumerate all of the results and add them to the
// list to be returned
for (int i = 0; i < pnProcInfo; i++)
{
try
{
processes.Add(Process.GetProcessById(processInfo[i].Process.dwProcessId));
}
// catch the error -- in case the process is no longer running
catch (ArgumentException) { }
}
}
else throw new Exception("Could not list processes locking resource.");
}
else if (res != 0) throw new Exception("Could not list processes locking resource. Failed to get size of result.");
}
finally
{
RmEndSession(handle);
}
return processes;
}
public static void remoteProcessKill(string computerName, string userName, string pword, string processName)
{
var connectoptions = new ConnectionOptions();
connectoptions.Username = userName;
connectoptions.Password = pword;
ManagementScope scope = new ManagementScope(@"\\" + computerName + @"\root\cimv2", connectoptions);
// WMI query
var query = new SelectQuery("select * from Win32_process where name = '" + processName + "'");
using (var searcher = new ManagementObjectSearcher(scope, query))
{
foreach (ManagementObject process in searcher.Get())
{
process.InvokeMethod("Terminate", null);
process.Dispose();
}
}
}
public static void localProcessKill(string processName)
{
foreach (Process p in Process.GetProcessesByName(processName))
{
p.Kill();
}
}
[DllImport("kernel32.dll")]
public static extern bool MoveFileEx(string lpExistingFileName, string lpNewFileName, int dwFlags);
public const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;
}
}
I had this problem, and it was solved by the code below:
var _path=MyFile.FileName;
using (var stream = new FileStream
(_path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
// Your Code! ;
}
I had a very specific situation where I was getting an "IOException: The process cannot access the file 'file path'" on the line
File.Delete(fileName);
Inside an NUnit test that looked like:
Assert.Throws<IOException>(() =>
{
using (var sr = File.OpenText(fileName)) {
var line = sr.ReadLine();
}
});
File.Delete(fileName);
It turns out NUnit 3 uses something they call "isolated context" for exception assertions. This probably runs on a separate thread.
My fix was to put the File.Delete in the same context.
Assert.Throws<IOException>(() =>
{
try
{
using (var sr = File.OpenText(fileName)) {
var line = sr.ReadLine();
}
}
catch
{
File.Delete(fileName);
throw;
}
});
I had the following scenario that was causing the same error:
Upload files to the server
Then get rid of the old files after they have been uploaded
Most files were small in size, however, a few were large, and so attempting to delete those resulted in the cannot access file error.
It was not easy to find, but the solution was as simple as waiting for the task to complete execution:
using (var wc = new WebClient())
{
var tskResult = wc.UploadFileTaskAsync(_address, _fileName);
tskResult.Wait();
}
In my case this problem was solved by opening the file for shared writing/reading. Below are samples for shared reading and writing:
Stream Writer
using(FileStream fs = new FileStream("D:\\test.txt",
FileMode.Append, FileAccess.Write, FileShare.ReadWrite))
using (StreamWriter sw = new StreamWriter(fs))
{
sw.WriteLine("any thing which you want to write");
}
Stream Reader
using (FileStream fs = new FileStream("D:\\test.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (StreamReader rr=new StreamReader(fs))
{
rr.ReadLine();
}
My code below works around the issue, but I suggest
first understanding what causes it and trying to fix it by changing your code.
I can give another way to handle it, but the better solution is to check your code structure and analyse what makes this happen. If you cannot find the root cause, you can fall back on the code below.
while (true)
{
try
{
// Put your file access code here
break; // success, stop retrying
}
catch (IOException ex)
{
// Handle only the "file in use" case; anything else is rethrown.
// (Note: the original goto-from-catch-into-try does not compile in C#.)
if (!ex.Message.StartsWith("The process cannot access the file"))
throw;
// Wait 5 seconds for the file to be freed, then try again
Thread.Sleep(5000);
}
}

Delphi 4 Python

I want to use the Delphi 4 Python components from https://github.com/pyscripter/python4delphi,
but I don't want to drop the components on a form; I want everything in code. My code goes like this:
var
PythonEngine_netA: TPythonEngine;
PythonInputOutput_netA: TPythonInputOutput;
begin
PythonEngine_netA := TPythonEngine.Create(Self);
PythonInputOutput_netA := TPythonInputOutput.Create(Self);
try
/// configure the components
PythonEngine_netA.DllName:='python39.dll';
PythonEngine_netA.IO := PythonInputOutput_netA;
PythonEngine_netA.UseLastKnownVersion := True;
PythonInputOutput_netA.OnSendUniData := PythonInputOutput_SendUniData;
PythonInputOutput_netA.UnicodeIO := True;
PythonInputOutput_netA.RawOutput := True;
/// execute the script
PythonEngine_netA.ExecString(UTF8Encode(mmo_pythoncode.text));
finally
PythonEngine_netA.free;
PythonInputOutput_netA.free;
end;
end;
Execution of this code fails with the error message "Python is not properly initialized".
What did I miss?
One quick look at PythonEngine.pas (or even better: always search all files for the error message to find out where and why an error is returned) tells me you missed calling PythonEngine_netA.Initialize().
Also note that /Demos describes:
Demo34 Dynamically creating, destroying and recreating PythonEngine. Uses PythonVersions
So please have a look at /Demos/Demo34/Unit1.pas to see how it is done there with (almost) no components. Or run the whole project, preferably single-stepping through it in debug mode to be aware of which method does what.
You just forgot to load the Dll:
PythonEngine_netA.UseLastKnownVersion:= True;
//PythonEngine_netA.opendll(PYDLL)
PythonEngine_netA.LoadDll;
PythonEngine_netA.IO:= PythonInputOutput_netA;

C++ Python Not Always Executing Python Script

I currently have a piece of hardware connected to C++ code using the MFC (Windows programming) framework. Basically the hardware is passing in image frames to my C++ code. In my C++ code, I am then calling a Python script using the CPython (Python embedding in C++) API to execute a model on that image. I've been noticing some weird behavior with the images though.
My C++ code is executing my Python script perfectly until some frame in the range of 80-90. After that point, my C++ code, for some reason, just stops executing the Python script. Despite that, the C++ code is still running normally - EXCEPT for the fact (which I just stated) that it's not executing the Python script.
Something to note: my Python script takes 5 seconds to execute the FIRST time, but then only 0.02 seconds to execute each frame after that first frame (I think due to the model getting set up).
At first, I thought it was a problem with speed, so I replaced all my Python code with just a "time.sleep()" call of varying duration; even if I sleep for 5 seconds, each C++ call to Python still always gets executed. As a result, I don't think it's a matter of the total time. For instance, with "time.sleep(1)", which sleeps for one second (longer than my Python script's execution time after the first frame), my Python script still always gets executed.
Does anyone have any idea why this might be happening? Could it be because of the uneven running times? Since it's taking 5 seconds to run the first frame and then significantly faster for each frame after that. Could it be that the Python is somehow unable to catch up after that time period?
This is my first time executing C++/Python on hardware, so I'm also new to this. Any help would be greatly appreciated!
To give some idea of my code, here is a snippet:
if (pFuncFrame && PyCallable_Check(pFuncFrame)) {
PyObject* pArgs = PyTuple_New(1);
PyTuple_SetItem(pArgs, 0, PyUnicode_FromString("img.bmp"));
PyObject_CallObject(pFuncFrame, pArgs);
std::cout << "Called the frame function";
}
else {
std::cout << "Did not get the frame function";
}
I'm willing to bet that the first execution ends in a Python exception which isn't cleared until you execute some new Python statement in the second iteration, which therefore fails immediately. I recommend fixing the memory leaks and adding some error handling code to get some diagnostics (which will be useful either which way). For example (haven't tried, since you didn't provide a compilable example, but the following shouldn't be too far off):
if (pFuncFrame && PyCallable_Check(pFuncFrame)) {
PyObject* pArgs = PyTuple_New(1);
PyTuple_SetItem(pArgs, 0, PyUnicode_FromString("img.bmp"));
PyObject* res = PyObject_CallObject(pFuncFrame, pArgs);
if (!res) {
if (PyErr_Occurred()) PyErr_Print();
else std::cerr << "Python exception without error set\n";
} else {
Py_DECREF(res);
std::cout << "Called the frame function";
}
Py_DECREF(pArgs);
}
else {
std::cout << "Did not get the frame function";
}

"Obtaining file position failed" while using ndarray.tofile() and numpy.fromfile() with FIFOs

I am writing some dummy code to learn how FIFOs work in Python (and to use them later in my ongoing projects). When I try to write to or read from one, I get the "OSError: obtaining file position failed" message.
I am trying to transport complex data between two Python programs. I am using FIFOs because I will need several different channels to communicate between running modules. I am running them with the bash script you can see below.
#first.py
import numpy as np
data = np.complex64([1, 2, 3])
fifo = open("fifoka", "wb")
data.tofile(fifo)
fifo.flush()
fifo.close()
#second.py
import numpy as np
fifo = open("fifoka", "rb")
data = np.fromfile(fifo, dtype=np.complex64)
fifo.close()
print(data)
#!/bin/bash
mkfifo fifoka
python3 first.py | \
python3 second.py
rm fifoka
If I use fifo.write(data.tobytes()) instead of data.tofile(fifo), it works fine, but according to the documentation they should behave the same way.
I have the same problem when I try to read from the same FIFO, so I think I am making the same mistake there.
So my question is: how should I use np.fromfile() and ndarray.tofile() correctly in this case?
Seems to be a defect in numpy. I just ran into this moving some code from Py2.7 (numpy version unknown) to Py3.8.2 with numpy 1.17.4. Changing arr.tofile(fout) to fout.write(arr.tobytes()) made it work.
Better solution: if the array you want to write is contiguous, use fout.write(memoryview(arr)), which avoids a copy. The utility below uses memoryview if possible, otherwise tobytes.
def arr_tofile(a, outfile):
    if a.flags['C_CONTIGUOUS']:
        b = memoryview(a)
    else:
        b = a.tobytes()
    outfile.write(b)
I've been using arr.tofile(f) via a pipe to send data to other processes under 2.x for over a decade, so this is a new defect, maybe specific to Python 3.
It's true that the docs say tofile bypasses the write method and uses the fd, but there's no reason to seek or tell in order to perform tofile, and no reason it should not work on a pipe.
UPDATE - it doesn't work in python3 because of a compatibility hack
For python2 operation, the procedure in tofile was to obtain FILE* from the python file object; then to flush that file, obtain the underlying file number, and then write to that using the PyArray_ToFile function.
Apparently this was broken by Python3, due to internal buffering added in the file object.
So now, it works by calling npy_PyFile_Dup2 to make a FILE* from a python file object, Then, the write is done to the fd from that FILE*, using the function PyArray_ToFile. Finally, npy_PyFile_DupClose2(file, fd, orig_pos) is used to close the new file and seek the original one to the same position. For python2, the npy_PyFile_Dup2 and npy_PyFile_DupClose2 are defined as basically stubs; a single underlying FILE* is used as described above.
In Python3 they do a lot more. I didn't go through it all, but npy_PyFile_Dup2 actually calls os.dup using python mechanisms to make a new file handle, and, after the write, npy_PyFile_DupClose2 does a 'tell' on the new file, followed by a 'seek' on the original python file to skip over the data just written on the other handle.
Bottom line - this is a bit of a mess - under python3, it's better to avoid using arr.tofile(file), even when it works, unless the size of the write is large enough to justify fairly substantial amount of overhead. And it won't work on a non-seekable file at all. Use file.write(memoryview(arr)) or file.write(arr.tobytes()) instead, in both cases.
Obvious Next Question - can this be fixed to work on pipes?
Maybe - it would rely on being able to detect that the output is a pipe, and in that case to flush the python file object and proceed to write to its file handle (as in the python 2 approach). When writing to a pipe there's no need to support a subsequent 'tell' on the output file.
https://github.com/numpy/numpy/blob/2f70544179e24b0ebc0263111f36e6450bbccf94/doc/source/release/1.8.1-notes.rst
Deprecations
C-API
The utility function npy_PyFile_Dup and npy_PyFile_DupClose are broken by the internal buffering python 3 applies to its file objects. To fix this two new functions npy_PyFile_Dup2 and npy_PyFile_DupClose2 are declared in npy_3kcompat.h and the old functions are deprecated. Due to the fragile nature of these functions it is recommended to instead use the python API when possible.
https://github.com/numpy/numpy/blob/382758355998951cea2b9f6ad1fb83e7dc4c3a02/numpy/core/src/multiarray/methods.c
PyObject *file = (from param)
FILE *fd;
npy_off_t orig_pos = 0;
fd = npy_PyFile_Dup2(file, "wb", &orig_pos);
if (fd == NULL) {
goto fail;
}
if (PyArray_ToFile(self, fd, sep, format) < 0) {
goto fail;
}
if (npy_PyFile_DupClose2(file, fd, orig_pos) < 0) {
goto fail;
}
if (own && npy_PyFile_CloseFile(file) < 0) {
goto fail;
}
Py_DECREF(file);
Py_RETURN_NONE;
https://github.com/numpy/numpy/blob/64fb290a8cb8fa9201f18015f3de1186e950a137/numpy/core/include/numpy/npy_3kcompat.h
/*
* Get a FILE* handle to the file represented by the Python object
*/
static NPY_INLINE FILE*
npy_PyFile_Dup2(PyObject *file, char *mode, npy_off_t *orig_pos)
{
int fd, fd2, unbuf;
PyObject *ret, *os, *io, *io_raw;
npy_off_t pos;
FILE *handle;
/* For Python 2 PyFileObject, use PyFile_AsFile */
#if !defined(NPY_PY3K)
if (PyFile_Check(file)) {
return PyFile_AsFile(file);
}
#endif
/* Flush first to ensure things end up in the file in the correct order */
[[[... continue to call os.dup via python interface....]]]
}
...
/*
* Close the dup-ed file handle, and seek the Python one to the current position
*/
static NPY_INLINE int
npy_PyFile_DupClose2(PyObject *file, FILE* handle, npy_off_t orig_pos)
{
int fd, unbuf;
PyObject *ret, *io, *io_raw;
npy_off_t position;
/* For Python 2 PyFileObject, do nothing */
#if !defined(NPY_PY3K)
if (PyFile_Check(file)) {
return 0;
}
#endif
position = npy_ftell(handle);
/* Close the FILE* handle */
fclose(handle);
....[[more]]...
I think this is due to how numpy.ndarray.tofile works.
from the docs
When fid is a file object, array contents are directly written to the file, bypassing the file object's write method. As a result, tofile cannot be used with file objects supporting compression (e.g., GzipFile) or file-like objects that do not support fileno() (e.g., BytesIO).
It is not possible to seek in a FIFO.
On a system-call level, lseek(2) is also used to get the file position. So that doesn't work either.
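To make that concrete, here is a small standalone C++ sketch (illustration only, not numpy code) showing that asking a pipe for its file position fails with ESPIPE, which is exactly the condition numpy trips over:
#include <cerrno>
#include <cstdio>
#include <unistd.h>
int main() {
    int fds[2];
    if (pipe(fds) != 0) return 1;
    // Pipes and FIFOs have no file position, so lseek always fails
    if (lseek(fds[0], 0, SEEK_CUR) == -1 && errno == ESPIPE)
        std::puts("pipe is not seekable (ESPIPE)");
    close(fds[0]);
    close(fds[1]);
    return 0;
}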

Stop embedded python

I'm embedding python in a C++ plug-in. The plug-in calls a python algorithm dozens of times during each session, each time sending the algorithm different data. So far, so good.
But now I have a problem:
The algorithm sometimes takes minutes to return a solution, and during that time the conditions often change, making that solution irrelevant. So what I want is to be able to stop the algorithm at any moment, and run it again immediately afterwards with a different set of data.
Here's the C++ code for embedding python that I have so far:
void py_embed (void*data){
counter_thread=false;
PyObject *pName, *pModule, *pDict, *pFunc;
//To inform the interpreter about paths to Python run-time libraries
Py_SetProgramName(arg->argv[0]);
if(!gil_init){
gil_init=1;
PyEval_InitThreads();
PyEval_SaveThread();
}
PyGILState_STATE gstate = PyGILState_Ensure();
// Build the name object
pName = PyString_FromString(arg->argv[1]);
if( !pName ){
textfile3<<"Can't build the object "<<endl;
}
// Load the module object
pModule = PyImport_Import(pName);
if( !pModule ){
textfile3<<"Can't import the module "<<endl;
}
// pDict is a borrowed reference
pDict = PyModule_GetDict(pModule);
if( !pDict ){
textfile3<<"Can't get the dict"<<endl;
}
// pFunc is also a borrowed reference
pFunc = PyDict_GetItemString(pDict, arg->argv[2]);
if( !pFunc || !PyCallable_Check(pFunc) ){
textfile3<<"Can't get the function"<<endl;
}
/*Call the algorithm and treat the data that is returned from it
...
...
*/
// Clean up
Py_XDECREF(pArgs2);
Py_XDECREF(pValue2);
Py_DECREF(pModule);
Py_DECREF(pName);
PyGILState_Release(gstate);
counter_thread=true;
_endthread();
};
Edit: the Python algorithm is not my work and I shouldn't change it.
This is based off of a cursory knowledge of python, and reading the python docs quickly.
PyThreadState_SetAsyncExc lets you inject an exception into a running python thread.
Run your python interpreter in some thread. From another thread, acquire the GIL (PyGILState_Ensure), then call PyThreadState_SetAsyncExc targeting the thread that runs the script. (This may require some precursor work to teach the python interpreter about the 2nd thread.)
Unless the python code you are running is full of "catch alls", this should cause it to terminate execution.
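A rough sketch of what that injection could look like (untested; it assumes you captured the id of the thread running the algorithm beforehand, e.g. with PyThread_get_thread_ident() inside that thread, which is bookkeeping you would have to add):
void cancel_python_algorithm(long thread_id)
{
    PyGILState_STATE gstate = PyGILState_Ensure();
    // Ask the interpreter to raise KeyboardInterrupt in that thread
    int n = PyThreadState_SetAsyncExc(thread_id, PyExc_KeyboardInterrupt);
    if (n > 1) {
        // More than one thread state was modified: revert, as the docs advise
        PyThreadState_SetAsyncExc(thread_id, NULL);
    }
    PyGILState_Release(gstate);
}
The exception is only delivered the next time the thread executes Python bytecode, so it won't interrupt a long-running C extension call.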
You can also look into the code to create python sub-interpreters, which would let you start up a new script while the old one shuts down.
Py_AddPendingCall is also tempting to use, but there are enough warnings around it that it's probably best avoided.
Sorry, but your choices are limited. You can either change the python code (ok, it's a plugin, so not an option) or run it in another PROCESS (with some nice IPC in between). Then you can use the system API to wipe it out.
So, I finally thought of a solution (more of a workaround really).
Instead of terminating the thread that is running the algorithm - let's call it T1 - I create another one - T2 - with the set of data that is relevant at that time.
In every thread I do this:
thread_counter+=1; //global variable
int thisthread=thread_counter;
and after the solution from python is given I just verify which is the most "recent", the one from T1 or from T2:
if(thisthread==thread_counter){
/*save the solution and treat it */
}
In terms of computational effort this is obviously not the best solution, but it serves my purposes.
Thank you for the help, guys.
I've been thinking about this problem, and I agree that sub interpreters may provide you one possible solution https://docs.python.org/2/c-api/init.html#sub-interpreter-support. It supports calls for creating new interpreters and ending existing ones. The bugs & caveats sections describes some issues that depending on your architecture may or may not present a problem.
Another possible solution is to use the python multiprocessing module, and within your worker thread test a global variable (something like time_to_die). Then from the parent, you grab the GIL, set the variable, release the GIL and wait for the child to finish.
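For the flag variant, the parent's side could look roughly like this (a sketch; the module name algo and the variable time_to_die are assumptions about how the script is organized, and the algorithm has to poll the flag inside its main loop for this to work):
void request_stop()
{
    PyGILState_STATE gstate = PyGILState_Ensure();
    // Flip a flag the long-running script is expected to poll
    PyRun_SimpleString("import algo\nalgo.time_to_die = True");
    PyGILState_Release(gstate);
}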
But then another idea occurred to me. Why not just use fork(), init your python interpreter in the child, and when the parent decides it's time for the python calculation to end, just kill it? Something like this:
void process() {
int pid = fork();
if (pid) {
// in parent
sleep(60);
kill(pid, 9);
}
else{
// in child
Py_Initialize();
PyRun_SimpleString("# insert long running python calculation");
}
}
(This example assumes *nix; if you're on Windows, substitute CreateProcess()/TerminateProcess().)
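For the Windows case, a rough equivalent could look like this (a sketch; python_runner.exe stands for a hypothetical helper executable that embeds the interpreter and runs the calculation):
#include <windows.h>
void process() {
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    char cmdline[] = "python_runner.exe"; // hypothetical helper program
    if (CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        // Give the calculation up to 60 seconds, then kill it
        if (WaitForSingleObject(pi.hProcess, 60000) == WAIT_TIMEOUT)
            TerminateProcess(pi.hProcess, 9);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
}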
