Is Python open(file, 'r') supposed to update atime?

Whenever I open() a file with Python, the last access time is not updated, which is very odd:
If I open with r/rb, nothing changes when I stat the file.
If I open with w, r+ or a, the ctime and mtime update properly, but the atime does not.
It doesn't look like a filesystem problem (ext3 in this case), because if I touch or cat the file the atime updates properly.
I haven't been able to find much information about this; is it supposed to behave this way, or is there something wrong?

Try running mount and check whether the noatime flag is set on the mounted filesystem. Also, if your kernel is recent enough, relatime is the default: with relatime, a read only updates atime if the previous atime is older than the mtime/ctime (or more than a day old), which would explain why plain reads appear to leave it untouched.
The open() code itself is pretty self-explanatory and does not mess with atime flags such as O_NOATIME:
/* >> fileutils.c from Python 3.2.3 */
FILE*
_Py_fopen(PyObject *path, const char *mode)
{
#ifdef MS_WINDOWS
    wchar_t wmode[10];
    int usize;

    usize = MultiByteToWideChar(CP_ACP, 0, mode, -1, wmode, sizeof(wmode));
    if (usize == 0)
        return NULL;

    return _wfopen(PyUnicode_AS_UNICODE(path), wmode);
#else
    FILE *f;
    PyObject *bytes = PyUnicode_EncodeFSDefault(path);
    if (bytes == NULL)
        return NULL;

    /* >> Plain fopen(), nothing fancy here. */
    f = fopen(PyBytes_AS_STRING(bytes), mode);
    Py_DECREF(bytes);
    return f;
#endif
}
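If you want to double-check outside Python, here is a quick C sketch of the same experiment (stat, read a byte through plain fopen(), stat again; on a relatime or noatime mount the two access times will typically be equal). The file path comes from argv; nothing else is assumed:

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat before, after;
    if (argc < 2 || stat(argv[1], &before) != 0)
        return 1;

    FILE *f = fopen(argv[1], "rb");  /* the same plain fopen() Python ends up calling */
    if (f == NULL)
        return 1;
    fgetc(f);
    fclose(f);

    if (stat(argv[1], &after) != 0)
        return 1;
    printf("atime %s\n", before.st_atime == after.st_atime
                         ? "unchanged (relatime/noatime?)" : "updated");
    return 0;
}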

Related

Why does my C++ for loop calling embedded Python only run on the first iteration?

I have a C++ program that calls a function inside a for loop.
The function does heavy work: it embeds Python and performs image processing.
My question is: why does it only run for the first value of the loop variables?
Main function (I only show the part of the code relevant to this question):
int main() {
    for (int a = 0; a < 5; a++) {
        for (int b = 0; b < 5; b++) {
            // on every iteration: call PyRead(), do the image processing, and compare
            if (PyRead() == 1) {
                // some action might occur here
            }
            else {
            }
        }
    }
    return 0;
}
PyRead(), the C++ function that enters the Python environment to perform the image processing:
bool PyRead() {
    string data2;
    Py_Initialize();
    PyRun_SimpleString("print 'hahahahahawwwwwwwwwwwww' ");
    char filename[] = "testcapture";
    PyRun_SimpleString("import sys");
    PyRun_SimpleString("sys.path.append(\".\")");
    PyObject * moduleObj = PyImport_ImportModule(filename);
    if (moduleObj)
    {
        PyRun_SimpleString("print 'hahahahaha' ");
        char functionName[] = "test";
        PyObject * functionObj = PyObject_GetAttrString(moduleObj, functionName);
        if (functionObj)
        {
            if (PyCallable_Check(functionObj))
            {
                PyObject * argsObject = PyTuple_New(0);
                if (argsObject)
                {
                    PyObject * resultObject = PyEval_CallObject(functionObj, argsObject);
                    if (resultObject)
                    {
                        if ((resultObject != Py_None) && (PyString_Check(resultObject)))
                        {
                            data2 = PyString_AsString(resultObject);
                        }
                        Py_DECREF(resultObject);
                    }
                    else if (PyErr_Occurred()) PyErr_Print();
                    Py_DECREF(argsObject);
                }
            }
            Py_DECREF(functionObj);
        }
        else PyErr_Clear();
        Py_DECREF(moduleObj);
    }
    Py_Finalize();
    std::cout << "The Python test function returned: " << data2 << std::endl;
    cout << "Data2 \n" << data2;
    if (compareID(data2) == 1)
        return true;
    else
        return false;
}
This is the second time I am asking this question on Stack Overflow; I hope it is clearer this time.
The program compiles with no errors.
When I run it, at a=0, b=0 it enters PyRead() and returns a value; after that, at a=0, b=1, the whole program ends.
It is supposed to enter PyRead() again, but instead it goes straight to termination.
I should stress that PyRead() takes a long time to run (about 30 seconds).
I have no idea what is happening and would appreciate some help.
Thanks.
See the note at https://docs.python.org/2/c-api/init.html#c.Py_Finalize :
Ideally, this frees all memory allocated by the Python interpreter.
Dynamically loaded extension modules loaded by Python are not unloaded.
Some extensions may not work properly if their initialization routine is called more than once.
It seems your module does not play well with this function, and your loop calls Py_Initialize()/Py_Finalize() on every iteration.
A workaround can be to create the script on the fly and run it with Python as a subprocess, as sketched below.
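A minimal sketch of that workaround, assuming the image-processing code lives in a standalone script; the name testcapture.py and the convention that it prints its result on stdout are assumptions here, not from the original post:

#include <stdio.h>
#include <string.h>

/* Run the script in a child Python process via popen() instead of
 * re-initializing the embedded interpreter on every call.
 * Returns 0 on success; copies the script's first output line into buf. */
static int run_pyread(char *buf, size_t bufsize)
{
    FILE *p = popen("python testcapture.py", "r");
    if (p == NULL)
        return -1;
    if (fgets(buf, (int)bufsize, p) == NULL) {
        pclose(p);
        return -1;
    }
    buf[strcspn(buf, "\n")] = '\0';   /* strip the trailing newline */
    return pclose(p) == 0 ? 0 : -1;
}

int main(void)
{
    char line[256];
    if (run_pyread(line, sizeof line) == 0)
        printf("script said: %s\n", line);
    return 0;
}

Each call pays a process-startup cost, but it sidesteps the repeated Py_Initialize()/Py_Finalize() cycle entirely.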

Limit Zbar to QR code only in Python

I'm using ZBar with its Processor option in Python. I've been trying to figure out how to limit the symbology to QR codes only, but have only found answers for C, such as:
scanner = new ImageScanner();
scanner.setConfig(Symbol.QRCODE, Config.ENABLE, 1);
I understand that the original code is written for C, but is there any way to do this in Python? Python isn't my main language, and it's a bit difficult for me to understand what the valid arguments for processor.parse_config() are (I currently pass 'enable'):
From https://github.com/npinchot/zbar/blob/master/processor.c
static PyObject*
processor_parse_config (zbarProcessor *self,
                        PyObject *args,
                        PyObject *kwds)
{
    const char *cfg = NULL;
    static char *kwlist[] = { "config", NULL };
    if(!PyArg_ParseTupleAndKeywords(args, kwds, "s", kwlist, &cfg))
        return(NULL);

    if(zbar_processor_parse_config(self->zproc, cfg)) {
        PyErr_Format(PyExc_ValueError, "invalid configuration setting: %s",
                     cfg);
        return(NULL);
    }
    Py_RETURN_NONE;
}
I don't even understand why 'enable' is a valid argument.
Took me some time to figure this out since there's no documentation and the config format is counter-intuitive, IMO, but here you go:
proc.parse_config('disable')
proc.parse_config('qrcode.enable')
The first line, disable, disables all scanners.
The second line enables the qrcode scanner.
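For reference, here is a sketch of the same two steps through ZBar's C API, built around zbar_processor_parse_config(), the call the Python binding shown above wraps:

#include <zbar.h>

int main(void)
{
    zbar_processor_t *proc = zbar_processor_create(0);  /* 0 = no extra thread */
    /* same two steps as the Python answer */
    zbar_processor_parse_config(proc, "disable");       /* turn all symbologies off */
    zbar_processor_parse_config(proc, "qrcode.enable"); /* re-enable QR codes only */
    /* ... feed images to the processor here ... */
    zbar_processor_destroy(proc);
    return 0;
}

The "symbology.option" format also explains why a bare 'enable' parses: with no symbology prefix, the setting applies to all symbologies.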

C program with embedded Python: How to restrict process to not open files nor sockets?

I would like to deny my C program certain rights/permissions/capabilities, e.g. opening any files (other than stdin, stdout, stderr) or any sockets, ideally even when run as root. The reason is that the program embeds a Python interpreter and might run untrusted code. Simplified version:
int main(int argc, char** argv)
{
    /* TODO: drop all rights/permissions/capabilities
       to open files or sockets here! */
    Py_Initialize();
    PyRun_SimpleString(argv[1]);
    Py_Finalize();
}
This has to work with Python 2.6 on Linux 3.2. Any ideas?
Maybe I have found the answer on my own; comments are highly desired!
I am trying to use the seccomp library to disallow all but certain syscalls.
It seems to work: in my naïve tests I can read from stdin and write to stdout, but cannot open files from Python.
#include <stdio.h>
#include <stdlib.h>   /* exit */
#include <string.h>   /* strerror */
#include <unistd.h>   /* STDIN_FILENO, STDOUT_FILENO */
#include <seccomp.h>
#include <python2.7/Python.h>

#define ERR_EXIT(err) do { \
    fprintf(stderr, "%s near line %d\n", strerror(-err), __LINE__); \
    exit(-1); } while (0)

int main(int argc, char** argv)
{
    int i;
    scmp_filter_ctx ctx;
    int err;

    Py_Initialize();

    /* return illegal calls with error */
    if (!(ctx = seccomp_init(SCMP_ACT_ERRNO(1)))) {
        ERR_EXIT(1);
    }
    /* allow write, but only to stdout */
    if ((err = seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write),
                                1, SCMP_A0(SCMP_CMP_EQ, STDOUT_FILENO)))) {
        ERR_EXIT(err);
    }
    /* allow read, but only from stdin */
    if ((err = seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read),
                                1, SCMP_A0(SCMP_CMP_EQ, STDIN_FILENO)))) {
        ERR_EXIT(err);
    }
    /* brk, exit, exit_group, and rt_sigaction are needed by Python */
    if ((err = seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0))) {
        ERR_EXIT(err);
    }
    if ((err = seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit), 0))) {
        ERR_EXIT(err);
    }
    if ((err = seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0))) {
        ERR_EXIT(err);
    }
    if ((err = seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(rt_sigaction), 0))) {
        ERR_EXIT(err);
    }
    if ((err = seccomp_load(ctx))) {
        ERR_EXIT(err);
    }

    for (i = 1; i < argc; i++) {
        PyRun_SimpleString(argv[i]);
    }
    Py_Finalize();
    return 0;
}
I very much appreciate any critique on this approach, thanks!
Maybe have a look at this or search for "restricted Python" elsewhere. You might be able to wrap the untrusted code such that it runs in a restricted environment.
You can use SELinux or AppArmor to restrict rights of an application on Linux.
In Unix/Linux, you can limit many resources from the command line using limit (a csh builtin; ulimit in Bourne-style shells):
% limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 10240 kbytes
coredumpsize 0 kbytes
memoryuse unlimited
vmemoryuse unlimited
descriptors 4096
memorylocked 64 kbytes
maxproc 1024
Thus, you can limit the number of open file descriptors (descriptors), which bounds both how many sockets and how many files you can open; unfortunately, it does not distinguish between the two kinds of file descriptors. You can likewise limit how many processes may be forked (maxproc) or how big the files you create may be (filesize).
Limits are inherited, so if you start a shell with certain limits restricted, every process invoked from it inherits them; i.e., you could confine a user by forcing them into an environment where these limits are set.
This may not be exactly what you are looking for, but it is a way to do some coarse limiting from the command line. The same limits can also be set programmatically, as sketched below.
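A minimal sketch (not from the original post) using setrlimit() to cap the process at the three standard descriptors; note it runs after Py_Initialize(), since the interpreter itself needs to open files on startup:

#include <stdio.h>
#include <sys/resource.h>
#include <python2.7/Python.h>

int main(int argc, char **argv)
{
    Py_Initialize();

    /* fds 0, 1, 2 stay usable; any further open()/socket() fails with EMFILE */
    struct rlimit rl = { 3, 3 };
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    PyRun_SimpleString(argv[1]);
    Py_Finalize();
    return 0;
}

One caveat against the original "even if run as root" requirement: a root process can raise its limits again, so this is weaker than the seccomp approach when running as root.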
Just for completeness: One can certainly use systemd-nspawn, but my target system still runs upstart. At some later point I will certainly look into this solution, maybe combined with seccomp and setrlimit.

Invalid Pointer Error when using free()

I am writing a Python extension in C (on Linux, Ubuntu 14.04) and ran into an issue with dynamic memory allocation. I searched through SO and found several posts about free() calls causing similar errors because free() tries to release memory that was not dynamically allocated. But I don't know if/how that is a problem in the code below:
#include <Python.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static PyObject* tirepy_process_data(PyObject* self, PyObject *args)
{
    FILE* rawfile = NULL;
    char* rawfilename = (char *) malloc(128*sizeof(char));
    if(rawfilename == NULL)
        printf("malloc() failed.\n");
    memset(rawfilename, 0, 128);
    const int num_datapts = 0; /* just some integer variable */
    if (!PyArg_ParseTuple(args, "si", &rawfilename, &num_datapts)) {
        return NULL;
    }
    /* Here I am going to open the file, read the contents and close it again */
    printf("Raw file name is: %s \n", rawfilename);
    free(rawfilename);
    return Py_BuildValue("i", num_datapts);
}
The output is:
Raw file name is: \home\location_to_file\
*** Error in `/usr/bin/python': free(): invalid pointer: 0xb7514244 ***
According to the documentation:
These formats allow to access an object as a contiguous chunk of memory. You don’t have to provide raw storage for the returned unicode or bytes area. Also, you won’t have to release any memory yourself, except with the es, es#, et and et# formats.
(Emphasis is added by me)
So you do not need to allocate memory with malloc() first, and you do not need to free() it afterwards.
The error occurs because, with the "s" format, PyArg_ParseTuple overwrites your pointer with the address of a buffer managed by Python. The pointer you later pass to free() was therefore never returned by malloc() (and your original 128-byte allocation leaks).
Please consider the API docs for PyArg_ParseTuple: https://docs.python.org/2/c-api/arg.html
You should NOT pass a pointer to allocated memory, nor should you free it afterwards.
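A corrected sketch of the function (names kept from your snippet), with no malloc()/free(): Python owns the string buffer, which stays valid as long as the argument tuple is alive:

static PyObject* tirepy_process_data(PyObject* self, PyObject *args)
{
    const char* rawfilename = NULL;  /* will point into a Python-owned buffer */
    int num_datapts = 0;

    if (!PyArg_ParseTuple(args, "si", &rawfilename, &num_datapts))
        return NULL;

    /* open the file, read the contents, close it again */
    printf("Raw file name is: %s \n", rawfilename);

    /* no free(): the buffer belongs to the argument object */
    return Py_BuildValue("i", num_datapts);
}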

Python, C: redirected stdout fires [Errno 9]

I am trying to log all the output of a program written in Python and C. However, printing from Python raises IOError: [Errno 9] Bad file descriptor.
Does anyone know what the problem is and how to fix it?
PS: This is on Windows XP, with Python 2.6 and MinGW GCC.
#include <windows.h>
#include <fcntl.h>
#include <io.h>      /* _pipe, _dup2 */
#include "Python.h"

int main()
{
    int fds[2];
    _pipe(fds, 1024, O_BINARY);
    _dup2(fds[1], 1);
    setvbuf(stdout, NULL, _IONBF, 0);

    /* alternative version: */
    // HANDLE hReadPipe, hWritePipe;
    // int fd;
    // DWORD nr;
    // CreatePipe(&hReadPipe, &hWritePipe, NULL, 0);
    // fd = _open_osfhandle((intptr_t)hWritePipe, _O_BINARY);
    // _dup2(fd, 1);
    // setvbuf(stdout, NULL, _IONBF, 0);

    write(1, "write\n", 6);
    printf("printf\n");

    Py_Initialize();
    PyRun_SimpleString("print 'print'"); // this breaks
    Py_Finalize();

    char buffer[1024];
    fprintf(stderr, "buffer size: %d\n", read(fds[0], buffer, 1024)); // should always be more than 0

    /* alternative version: */
    // CloseHandle(hWritePipe);
    // char buffer[1024];
    // ReadFile(hReadPipe, buffer, 1024, &nr, NULL);
    // fprintf(stderr, "buffer size: %d\n", nr); // should always be more than 0
    return 0;
}
I think it could be to do with different C runtimes: you can't pass file descriptors between different C runtimes, and Python is built with MSVC (you will need to check which version). You could try to make your MinGW build link against the same C runtime; MinGW has options for this, such as -lmsvcr80 (or whichever version is appropriate), but for licensing reasons those libraries can't be distributed with it, so you will have to find them on your system. Sorry I don't have more details for now, but hopefully it's a start for some googling.
A simpler way would be to do it all in Python: write a class that exposes a write() (and perhaps flush()) method and assign an instance of it to sys.stdout. For a file you can simply use an open file object, and doing the same for your pipe should be straightforward. Then import sys and set sys.stdout from a PyRun_SimpleString, as sketched below.
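A sketch of that approach, done entirely inside the embedded interpreter; the _Logger class and the out.log path are made up for illustration (anything exposing write() will do):

#include "Python.h"

int main(void)
{
    Py_Initialize();
    /* Replace sys.stdout with a small Python object exposing write()/flush();
       output then never crosses the C-runtime boundary. */
    PyRun_SimpleString(
        "import sys\n"
        "class _Logger(object):\n"
        "    def __init__(self, path):\n"
        "        self._f = open(path, 'a')\n"
        "    def write(self, text):\n"
        "        self._f.write(text)\n"
        "    def flush(self):\n"
        "        self._f.flush()\n"
        "sys.stdout = _Logger('out.log')\n");
    PyRun_SimpleString("print 'print'");  /* lands in out.log, no Errno 9 */
    Py_Finalize();
    return 0;
}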
