I am currently working on a Python program for finding flaky tests by running them multiple times. To achieve this, I execute the tests in a virtualenv, in random order, using pytest.
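Roughly, the runner does something like the sketch below (a simplified illustration: the random-order plugin, paths, and function names are placeholders, not the exact code):

# Simplified sketch of the flaky-test runner: run pytest repeatedly in random order
# and record the outcome of each run.
import subprocess
import sys

def run_suite(n_runs, test_dir):
    outcomes = []
    for _ in range(n_runs):
        # "-p randomly" assumes the pytest-randomly plugin is installed in the
        # virtualenv; it shuffles the test order with a new seed on every run.
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-p", "randomly", test_dir],
            capture_output=True, text=True,
        )
        outcomes.append(result.returncode)
    # Runs that mix passes and failures point at flaky tests.
    return outcomes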
When I execute the program on a remote machine via a Slurm job, I get the following error messages:
2019-11-26 18:18:18,642 - CRITICAL - Failed to configure container: [Errno 1] Creating overlay mount for '/' failed: Operation not permitted. Please use other directory modes, for example '--read-only-dir /'.
2019-11-26 18:18:18,777 - CRITICAL - Cannot execute 'pytest': execution in container failed.
This doesn't happen on my local machine, only in the task started via the Slurm job.
This is my first time working with Python at this level of complexity, so I'm not really sure where to start solving the problem.
Thanks a lot in advance!
I finally figured out that the problem only occurs with the newest version of benchexec.
When my Python program executes run exec --no-container --pytest inside a virtualenv and benchexec is version 2.0 or higher, the error message from my original post shows up. I simply tell pip to install an older version of benchexec in my virtualenv, and voilà, it works.
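For anyone hitting the same thing, the workaround is just a version pin when setting up the virtualenv (the exact bound is based on my observation that the problem starts with 2.0):

pip install "benchexec<2.0"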
I would've created the tag benchexec, but I don't have the necessary reputation to do so. Feel free to do it for me!
I have a CodeDeploy application that deploys to Windows instances. I have a Python script that runs as part of the ValidateService hook. Below is the code I have in that script:
print("hello")
So I have removed everything else and am just printing hello in this script. When CodeDeploy calls this script, I get an ENOEXEC error.
My appspec.yml file:
...
ValidateService:
  - location: scripts/verify_deployment.py
    timeout: 900
I tried searching Google for help but found nothing. Can someone please help me here?
Thanks
As Marcin already answered in a comment, I don't think you can simply run Python scripts in CodeDeploy, at least not natively.
The error you see means that Windows does not know how to execute the script you have provided. AFAIK Windows can't run Python scripts natively (like most Linux distros can).
I am not very familiar with CodeDeploy, but given the example at https://github.com/aws-samples/aws-codedeploy-samples/tree/master/applications/SampleApp_Windows, I think you have to install Python first.
After a lot of investigation, I found my answer. The issue is a little misleading; it has nothing to do with the code format or ENOEXEC. The issue was the Python path: while executing my script, CodeDeploy was unable to find Python (even though I had already added python.exe to the PATH environment variable).
Because of this path issue, CodeDeploy is unable to execute the .py file directly, so I created a PowerShell script and invoke the Python script from there, like below:
C:\Users\<username>\AppData\Local\Programs\Python\Python37-32\python.exe C:\Users\<username>\Documents\verify_deployment.py
It executed the Python script successfully and gave me the output below:
hello
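One extra note: CodeDeploy decides whether the ValidateService hook passed from the script's exit code, so once the wrapper works you can make the Python script do a real check and exit non-zero on failure (you may also need to end the PowerShell wrapper with exit $LASTEXITCODE so the Python exit code is passed back). A rough sketch, with a made-up health-check URL:

# Hypothetical verify_deployment.py: probe the deployed service and report the result
# to CodeDeploy through the exit code (0 = validation passed, non-zero = failed).
import sys
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # placeholder, adjust to your app

try:
    with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
        if resp.status == 200:
            print("service is healthy")
            sys.exit(0)
        print("unexpected status: %d" % resp.status)
        sys.exit(1)
except Exception as exc:
    print("health check failed: %s" % exc)
    sys.exit(1)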
I hope you can help me. I've been looking for an answer but couldn't find anything concrete, which is why I'm creating this topic.
I'm having issues trying to execute a PythonScript task from Azure DevOps Pipelines. The script is supposed to validate the pom.xml version against the artifact in Azure Artifacts. I have tested this locally on the same machine where the agent is running, and it works, but whenever the pipeline runs and tries to execute my script via the path option, it returns an ENOENT error. I have enabled the System.Debug option but didn't see anything wrong, and I also tested the same command on the host and it worked. I also tried different versions of Python without any results.
This is the error I'm getting:
[command]/usr/bin/python /home/azdevops/azagent/_work/2933/templates/azure-devops/anka/lib/helper-functions.py getPomVersion hpsais-izipay-webview/pom.xml
##[error]There was an error when attempting to execute the process '/usr/bin/python'. This may indicate the process failed to start.
Error: spawn /usr/bin/python ENOENT
My host is:
Debian GNU/Linux 9 (stretch)
I also have this same task in a Release Pipeline, but the difference is that that one is an inline script, and it works properly.
Thanks in advance!
Currently I am facing a very confusing issue.
I am executing a job from Rundeck against a remote Windows machine, using WinRM as the node executor and file copier, which executes an inline PowerShell script.
I tried it, and it worked fine on one of three environments.
On the preprod and prod Rundeck instances, the same job (exported/imported) fails, even though the three environments have the same settings, the same script, the same args, and even the same Windows version.
I added a WinRM check-connection step, and it succeeds.
Rundeck manages to copy the script to the machine (with the wrong name, however), which means authentication is working. However, it fails with this cryptic error:
[ERROR ] Execution finished with the following error (winrm-exec.py:304)[root]
[ERROR ] The parameter is incorrect. (extended fault data: {u'fault_subcode': 'w:InvalidParameter', u'fault_code': 's:Sender', u'wsmanfault_code': '87', 'transport_message': u'Bad HTTP response returned from server. Code 500', 'http_status_code': 500}) (winrm-exec.py:305)[root]
[WinRMPython]: result code: 1, success: false
Failed: NonZeroResultCode: [WinRMPython] Result code: 1
When I execute the copied PowerShell script locally on the machine, it works fine.
WinRM plugin version: 2.0.9
Python: 2.7.17
Switching the WinRM plugin to Python 3 resolved the problem. However, it caused problems for Windows Server 2008 and older versions.
So if you have Windows versions from both eras (pre- and post-2008), you would need to split your Rundeck projects and have winrm available for both Python 2 and Python 3.
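If it helps, a quick sanity check that the WinRM library is actually importable by the interpreter Rundeck will use (pywinrm is the package name on PyPI; adjust to your setup):

python3 -m pip install pywinrm
python3 -c "import winrm"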
For future reference, the solution is here: switching to the Python 3 interpreter (instead of Python 2) for the Default Node Executor and Default File Copier (Project Settings > Edit Configuration > Default Node Executor and Default File Copier tabs) solves the issue.
I'm working on a Linux machine running Ubuntu Bionic Beaver, release 18.04.
The other day I mistakenly changed the /usr/ directory to be owned by a user instead of root. Unfortunately, I did that recursively, and so messed up quite a bit of the system, because it also changed the setuid permissions on some of the commands (e.g. passwd, sudo). We really can't reinstall (well, we can, but it'll cost!), so I booted from a live USB and manually restored the correct user/group/permissions for every file I could identify as having a non-root:root user:group. I did this by comparing against the output of ls -lha /usr/ on another Ubuntu computer.
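For reference, that kind of scan can be scripted; here is a rough sketch of how one could list everything under /usr that is not owned by root:root (an illustration, not the exact commands I ran):

# Walk /usr and print every path whose owner or group is not root (uid/gid 0),
# so the entries can be compared against a healthy machine and fixed up.
import os

for dirpath, dirnames, filenames in os.walk("/usr"):
    for name in dirnames + filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.lstat(path)  # lstat so symlinks themselves are checked, not their targets
        except OSError:
            continue
        if st.st_uid != 0 or st.st_gid != 0:
            print(path, st.st_uid, st.st_gid, oct(st.st_mode & 0o7777))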
It seems to be mostly fixed, but now I'm running into the error 'std::bad_alloc' when running some pretty standard Python scripts. The strange part is that it only comes up sometimes. For example, if I open python from the command line and copy and paste the code in, it all runs fine with no error. However, if I run the entire script from the command line (e.g. python script.py), then I get this error. The full error message is:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
But to add another twist: sometimes I can run the same Python script from the command line with no problem, and other times I get the error above.
If anybody has ideas about where specifically to look to fix this, that'd be great! I'm going to try the same approach as before, but with the ls -lha /usr/ output from an 18.04 release, as I only had output from a 16.04 release on hand.
I ran into an additional issue with Supervisord.
CentOS 6.5
supervisor
Python 2.6 installed with the OS
Python 2.7 installed in /usr/local/bin
Supervisord program settings:
[program:inf_svr]
process_name=inf_svr%(process_num)s
directory=/opt/inf_api/
environment=USER=root,PYTHONPATH=/usr/local/bin/
command=python2.7 /opt/inf_api/inf_server.py --port=%(process_num)s
startsecs=2
user=root
autostart=true
autorestart=true
numprocs=4
numprocs_start=8080
stderr_logfile = /var/log/supervisord/tornado-stderr.log
stdout_logfile = /var/log/supervisord/tornado-stdout.log
I can run inf_server.py with:
python2.7 inf_server.py --port=8080
with no problems.
I made sure the files were executable (that was my problem before).
Any thoughts?
UPDATE:
I can't get it to launch even a basic Python script without failing.
I started by commenting out the old program, adding a new one, and then putting in:
command=python /opt/inf_api/test.py
where test.py just writes something to the screen and to a file. It fails with exit status 0.
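For reference, test.py is essentially just this (reconstructed from memory; the file path is made up):

# Minimal test script: print to stdout and append a line to a file, nothing else.
with open("/tmp/supervisord_test.log", "a") as f:
    f.write("test.py ran\n")
print("test.py ran")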
So I started adding back in the location of Python (after discovering it with 'which python'):
environment=PYTHONPATH=/usr/bin
I tried putting the path in single quotes, tried adding USER=root to the environment, and tried adding
directory=opt/inf_api/
tried adding
user=root
All with the same result: exit status 0. Nothing seems to be added to any log files either, except what I'm seeing from supervisord's debug output.
Man, I am at a loss.
This turns out to be an issue with how Supervisord catches error messages from Python. As in, it doesn't. I'm running it to launch a Tornado app that calls a second Python file so it can spawn n instances of Tornado servers. If there are errors in that second Python app, Supervisord isn't catching them and saving them to the log files. I tried all manner of methods but ended up having to catch them myself with try/except and save them to my own log files. Probably good practice anyway, but talk about a roundabout way of going about it.
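For anyone else in the same spot, the workaround looks roughly like this (module names and paths are illustrative, not the real app):

# Illustrative wrapper: log any exception from server startup to our own file,
# since Supervisord wasn't capturing tracebacks from the second script.
import logging
import traceback

logging.basicConfig(
    filename="/var/log/inf_svr/errors.log",  # made-up path
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

try:
    from inf_server_impl import start_server  # hypothetical module that spawns the Tornado instances
    start_server()
except Exception:
    logging.error("server crashed:\n%s", traceback.format_exc())
    raise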