I hope you can help me. I've been looking for an answer but couldn't find anything concrete, which is why I'm creating this topic.
I'm having issues executing a Python script from an Azure DevOps pipeline. The script is supposed to validate the pom.xml version against the artifact in Azure Artifacts. I have tested this locally on the same machine where the agent is running and it works, but whenever the pipeline runs and tries to execute my script via the path option, it returns an ENOENT error. I enabled the System.Debug option but didn't see anything wrong, and I also tested the same command on the host, where it worked. I also tried different versions of Python, with no results.
This is the error I'm getting:
[command]/usr/bin/python /home/azdevops/azagent/_work/2933/templates/azure-devops/anka/lib/helper-functions.py getPomVersion hpsais-izipay-webview/pom.xml
##[error]There was an error when attempting to execute the process '/usr/bin/python'. This may indicate the process failed to start.
Error: spawn /usr/bin/python ENOENT
My host is:
Debian GNU/Linux 9 (stretch)
I also have this same task in a Release Pipeline, but the difference is that that one uses an inline script, and it works properly.
Thanks in advance!
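As a diagnostic sketch (not from the original post): since the agent spawns the hard-coded path /usr/bin/python, a short script run as a pipeline step can show whether that path exists on the agent host and which interpreter the agent actually resolves. All output labels here are illustrative.

```python
import os
import shutil
import sys

# Compare the interpreter this process is running under with what a bare
# "python"/"python3" resolves to on PATH, and check whether the hard-coded
# /usr/bin/python from the failing task exists on this host.
print("running under:     ", sys.executable)
print("'python' on PATH:  ", shutil.which("python"))
print("'python3' on PATH: ", shutil.which("python3"))
print("/usr/bin/python exists:", os.path.exists("/usr/bin/python"))
```

If /usr/bin/python turns out not to exist in the agent's environment (Debian stretch ships it as a symlink that may be absent for some users), pointing the task at the interpreter that does resolve would avoid the ENOENT.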
I am attempting to write a Python script that calls rsync to synchronize code between my VM and my local machine. I had done this successfully using subprocess.call to sync from my local machine to my VM, but when I tried to write a command that would sync from my VM to my local machine, subprocess.call seems to duplicate text or modify the command in a way that doesn't make sense to me.
I've seen a number of posts about subprocess relating to whether a shell is expected / being used, but I tried running the command with both shell=True and shell=False and saw the same weird command modification.
Here is what I am running:
command = ['rsync', '-avP', f'{user}#{host}:/home/{user}/Workspace/{remote_dir}', os.getcwd()]  # attempting to sync from a dir on my remote machine to my current directory on my local machine
print('command: ' + ' '.join(command))
subprocess.call(command)
Here is the output (censored):
command: rsync -avP benjieg#[hostname]:/home/benjieg/Workspace/maige[dirname] /Users/benjieg/Workspace
building file list ...
rsync: link_stat "/Users/benjieg/Workspace/benjieg#[hostname]:/home/benjieg/Workspace/[dirname]" failed: No such file or directory (2)
8383 files to consider
sent 181429 bytes received 20 bytes 362898.00 bytes/sec
total size is 217164673 speedup is 1196.84
rsync error: some files could not be transferred (code 23) at /AppleInternal/Library/BuildRoots/810eba08-405a-11ed-86e9-6af958a02716/Library/Caches/com.apple.xbs/Sources/rsync/rsync/main.c(996) [sender=2.6.9]
If I run the command directly in my terminal, it works just fine. The mysterious thing I'm seeing is that subprocess appears to prepend some extra path before my user name in the command. You can see this in the link_stat line in the output. I'm guessing this is why it's unable to find the directory, but I really can't figure out why it would do that.
Further, I tried moving out of the Workspace directory on my local machine to home, and even weirder behavior ensued. It seemed to begin looking at all of the files on my local machine, even prompting Dropbox to ask me if iTerm2 had permission to access it. Why would this happen? Why would whatever subprocess is doing cause a search of my whole local file system? I'm really mystified by this seemingly random behavior.
I'm on macOS 13.0.1 (22A400) using Python 3.10.4.
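One detail worth noting in the command above (an observation, not from the original post): the remote is written as {user}#{host} with a #, while the conventional rsync/ssh remote syntax is user@host:path. The link_stat line in the output shows rsync resolving the entire user#host:/... string as a local filename relative to the working directory, which is consistent with the argument not being recognized as a remote spec. A minimal sketch of the invocation with @ instead (user, host, and remote_dir values are placeholders):

```python
import os
import subprocess

# Placeholder values for illustration only.
user = "benjieg"
host = "example-host"
remote_dir = "project"

# "user@host:path" is the conventional remote-source form; the original
# post's "#" left the whole string to be treated as a local path, as seen
# in the link_stat error in the question's output.
command = [
    "rsync", "-avP",
    f"{user}@{host}:/home/{user}/Workspace/{remote_dir}",
    os.getcwd(),
]
print("command:", " ".join(command))
# subprocess.call(command)  # uncomment to actually run the transfer
```

Note that subprocess itself does not rewrite arguments; passing a list without shell=True hands each element to rsync verbatim, so any path rewriting seen in the output comes from how rsync interprets the source string.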
I have a CodeDeploy application that deploys to Windows instances. I have a Python script which runs as part of the ValidateService hook. Below is the code I have in that script:
print("hello")
So, I have removed everything and am just printing hello in this script. When this script is called by CodeDeploy, I get the below error:
My appspec.yml file:
...
ValidateService:
- location: scripts/verify_deployment.py
timeout: 900
I tried getting some help on Google but got nothing. Can someone please help me here?
Thanks
As Marcin already answered in a comment, I don't think you can simply run Python scripts in CodeDeploy, at least not natively.
The error you see means that Windows does not know how to execute the script you have provided. AFAIK Windows can't run Python scripts natively (the way most Linux distros can).
I am not very accustomed to CodeDeploy, but given the example at https://github.com/aws-samples/aws-codedeploy-samples/tree/master/applications/SampleApp_Windows, I think you have to install Python first.
After much investigation, I found my answer. The error is a little misleading: it has nothing to do with the code format or ENOEXEC. The issue was the Python path. While executing my script, CodeDeploy was unable to find Python (even though I had already added python.exe to the PATH environment variable).
I also found that CodeDeploy is unable to execute a .py file directly because of this Python path issue, so I created a PowerShell script that invokes the Python script, like below:
C:\Users\<username>\AppData\Local\Programs\Python\Python37-32\python.exe C:\Users\<username>\Documents\verify_deployment.py
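With this approach, the ValidateService hook would then point at the PowerShell wrapper instead of the .py file. A sketch of the corresponding appspec.yml fragment, following the hook layout from the question (the .ps1 filename is illustrative):

```yaml
ValidateService:
  - location: scripts/verify_deployment.ps1
    timeout: 900
```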
It executed the Python script successfully and gave me the below output:
hello
Currently I am facing a very confusing issue.
I am executing a job from Rundeck against a remote Windows machine, using WinRM as the node executor and file copier, which runs an inline PowerShell script.
I tried this and it worked fine on one of three environments.
On the Preprod and Prod Rundeck instances, the same job (exported/imported) fails, despite the same settings on all three environments: same script, same args, even the same Windows version.
I added a WinRM check-connection step, and it succeeds.
Rundeck manages to copy the script to the machine (with a wrong name, however), which means authentication is going well. However, it fails with this abstract error:
[ERROR ] Execution finished with the following error (winrm-exec.py:304)[root]
[ERROR ] The parameter is incorrect. (extended fault data: {u'fault_subcode': 'w:InvalidParameter', u'fault_code': 's:Sender', u'wsmanfault_code': '87', 'transport_message': u'Bad HTTP response returned from server. Code 500', 'http_status_code': 500}) (winrm-exec.py:305)[root]
[WinRMPython]: result code: 1, success: false
Failed: NonZeroResultCode: [WinRMPython] Result code: 1
When I try to execute the copied PowerShell script locally on the machine, it works fine.
WinRM plugin version: 2.0.9
Python: 2.7.17
Switching to Python 3 in the WinRM plugin resolved the problem. However, it caused problems for Windows Server 2008 and older versions.
So if you have both Windows OS versions [pre- and post-2008], you would need to split the Rundeck projects and have winrm available on both Python 2 and Python 3.
For future reference, the solution is here: switching to the Python 3 interpreter (instead of Python 2) for the Default Node Executor and Default File Copier (Project Settings > Edit Configuration > Default Node Executor and Default File Copier tabs) solves the issue.
I am currently working on a Python program for finding flaky tests by running them multiple times. To achieve this, I'm executing the tests in a virtualenv, in random order, using pytest.
When I execute the program on a remote machine via a Slurm job, I get the following errors:
2019-11-26 18:18:18,642 - CRITICAL - Failed to configure container: [Errno 1] Creating overlay mount for '/' failed: Operation not permitted. Please use other directory modes, for example '--read-only-dir /'.
2019-11-26 18:18:18,777 - CRITICAL - Cannot execute 'pytest': execution in container failed.
This doesn't happen on my local machine, only in the task started via the Slurm job.
This is my first time working with Python at this complexity, so I'm not really sure where to start solving the problem.
Thanks a lot in advance!
I finally figured out that the problem only occurs with the newest version of BenchExec.
When my Python program executes runexec --no-container --pytest inside a virtualenv and BenchExec is version 2.0 or higher, the error message from my original post shows up. I simply tell pip to install an older version of BenchExec in my virtualenv, and voilà, it works.
I would've created the tag benchexec but don't have the necessary reputation to do so. Feel free to do it for me!
I'm working on a Linux machine running Ubuntu Bionic Beaver, release 18.04.
The other day I mistakenly changed the /usr/ directory to be owned by a regular user instead of root. Unfortunately, I did that recursively, and so messed up quite a bit of the system, because it also changed the setuid permissions on some of the commands (e.g. passwd, sudo). We really can't reinstall (well, we can, but it'll cost!), so I booted from a live USB and manually restored the correct user/group/permissions for each file I could identify as having a non-root:root user:group. I did this by comparing against the output of ls -lha /usr/ from another Ubuntu computer.
It seems to be mostly fixed, but now I'm running into the error 'std::bad_alloc' after running some pretty standard Python scripts. The strange part is that it only comes up sometimes. For example, if I open Python from the command line and copy and paste the code in, it all runs fine with no error. However, if I run the entire script from the command line (e.g. python script.py), then I get this error. The full error message is:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
But to add another twist: sometimes I can run the same Python script from the command line with no problem, and other times I get the error above.
If anybody has ideas as to where specifically to look to fix this, that'd be great! I'm going to try to do the same thing as before, but with the ls -lha /usr/ output from an 18.04 release, as I only had output from a 16.x release on hand.