I have a src directory that contains a pyproject.toml right next to my setup.py (the file definitely exists; I have checked).
But when I run the task
- bash: pytest -c src/pyproject.toml
  displayName: Run tests
  workingDirectory: $(Pipeline.Workspace)
I get FileNotFoundError: [Errno 2] No such file or directory: '/home/vsts/work/1/src/pyproject.toml'.
Everything works fine when I don't point pytest at this file. Why does this fail? The same setup works fine locally.
The workingDirectory should be $(System.DefaultWorkingDirectory), not $(Pipeline.Workspace).
$(Pipeline.Workspace) is the local path on the agent where all folders for a given build pipeline are created.
$(System.DefaultWorkingDirectory) is the local path on the agent where your source code files are downloaded.
See this document for detailed information about predefined variables in Azure DevOps.
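For example, the task from the question would become something like this (a minimal sketch; it assumes the src directory sits at the root of the checked-out repository):

- bash: pytest -c src/pyproject.toml
  displayName: Run tests
  workingDirectory: $(System.DefaultWorkingDirectory)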
Whenever I use snakemake with singularity and would like to create a directory or file, I run into read-only errors if I try to write outside of the snakemake directory.
For example, if my rule contains the following:
container:
    "docker://some_image"
shell:
    "touch ~/hello.txt"
then everything runs fine. hello.txt is created inside the snakemake directory. However if my touch command tries to create a file outside of the snakemake directory:
container:
    "docker://some_image"
shell:
    "touch /home/user/hello.txt"
Then I get the following error:
touch: cannot touch '/home/user/hello.txt': Read-only file system
Is there any way to give snakemake the ability to create files anywhere it wants when using singularity?
Singularity by default mounts certain directories, including the user's home directory. In your first command (touch ~/hello.txt), the file gets written to the home directory, where singularity has read/write permissions. However, in your second command (touch /home/user/hello.txt), singularity doesn't have read/write access to /home/user/; you will need to bind that path manually using singularity's --bind and supply it to snakemake via --singularity-args.
So the snakemake command would look something like
snakemake --use-singularity --singularity-args "--bind /home/user/"
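If the path should appear at a different location inside the container, --bind also accepts src:dest pairs, e.g. (hypothetical paths):

snakemake --use-singularity --singularity-args "--bind /data/user:/home/user"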
I use subprocess.Popen in python3 to put files into HDFS and make directories there. It runs fine when I execute it with python3 from the Linux shell, but when I run the code from crontab, I get "FileNotFoundError: [Errno 2] No such file or directory: 'hdfs': 'hdfs'" in my log file.
make_dir = subprocess.Popen(['hdfs', 'dfs', '-mkdir', '-p', hdfs_path])
I had a similar problem when my script was executed by Airflow on the same machine: Airflow did not load my environment variables, so I had to set them manually in code (you can find how to do that in code here). Just update PATH to include the Hadoop binaries path. Alternatively, you can load the environment variables in the cron job script, like:
#!/bin/bash
source $HOME/.bash_profile
some_other_cmd
more details
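In code, setting PATH before spawning the subprocess could look like this (a minimal sketch; the Hadoop location below is an assumption, adjust it to wherever your hdfs binary actually lives):

import os
import subprocess

# Hypothetical install location; check with `which hdfs` in an interactive shell.
os.environ['PATH'] = '/usr/local/hadoop/bin:' + os.environ.get('PATH', '')

hdfs_path = '/user/me/some_dir'  # example target directory
make_dir = subprocess.Popen(['hdfs', 'dfs', '-mkdir', '-p', hdfs_path])
make_dir.wait()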
I have a python script that I am trying to run as part of a GitLab Pages deployment of a Jekyll site. My site has blog posts with various tags, and the python script generates the .md files for the tag pages. The script works perfectly fine when I run it manually in an IDE, but I want it to be part of the GitLab CI deployment process.
Here is what my .gitlab-ci.yml setup looks like:
run:
  image: python:latest
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - public
  only:
    - master

pages:
  image: ruby:2.3
  stage: deploy
  script:
    - bundle install
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  only:
    - master
However, it doesn't actually create the files it's supposed to create. Here is the output from the "run" job:
...
Cloning repository...
Cloning into '/builds/username/projectname'...
Checking out 4c8a47fe as master...
Skipping Git submodules setup
$ python tag_generator.py
Tags generated, count 23
Uploading artifacts...
WARNING: public: no matching files
ERROR: No files to upload
Job succeeded
The script prints "Tags generated, count ___" once it's executed, so it is running. However, the files it's supposed to create aren't being created/uploaded into the right directory. There is a /tag directory in the root project folder; that is where they are supposed to go.
I realize the issue must have something to do with the public folder, but even when I don't have

artifacts:
  paths:
    - public

it still doesn't create the files in the /tag directory, so it doesn't work whether I have - public or not, and I don't know what the problem is.
I FIGURED IT OUT!
the "build" for the project isn't made in the repo, gitlab clones the repo into another place, so I had to change the artifact path for the python job so that it's in the cloned "build" location, like so:
run:
  image: python:latest
  stage: test
  before_script:
    - python -V  # Print out python version for debugging
    - pip install virtualenv
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - /builds/username/projectname/tag
  only:
    - master
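A note on portability: GitLab exposes the clone location as the predefined variable $CI_PROJECT_DIR, and artifact paths are documented as being resolved relative to that directory, so a relative path should work as well and avoids hard-coding the namespace and project name:

artifacts:
  paths:
    - tag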
I'm trying to create a new profile for snakemake to easily run workflows on our cluster system. I followed the examples available in the Snakemake-Profiles repo and have the following directory structure:
project/
- my_cluster/
-- config.yaml
-- cluster-submit.py
-- jobscript.sh
- Snakefile
- (other files)
Thus, the my_cluster directory contains the cluster profile. Excerpt of config.yaml:
cluster: "cluster-submit.py {dependencies}"
jobscript: "jobscript.sh"
jobs: 1000
immediate-submit: true
notemp: true
If I try to run my workflow as follows:
snakemake --profile my_cluster target
I get the following error: sh: command not found: cluster-submit.py for each rule snakemake tries to submit. This may not be super surprising, considering the cluster-submit.py script lives in a different directory, but I'd have thought that snakemake would handle the paths when using profiles. Am I forgetting something? Did I misconfigure something? The cluster-submit.py file has executable permissions, and when I use absolute paths in my config.yaml everything works fine.
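For reference, the absolute-path workaround mentioned above looks something like this in config.yaml (the /path/to/project prefix is a placeholder for wherever the profile actually lives):

cluster: "/path/to/project/my_cluster/cluster-submit.py {dependencies}"
jobscript: "/path/to/project/my_cluster/jobscript.sh"
jobs: 1000
immediate-submit: true
notemp: true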
I have a Python project that uses tox. Some unit tests require sudo, so .travis.yml has
script:
  - sudo tox
However, this leaves the egg-info file and others owned by root. So when Travis runs the deploy step (as user), it gives the following output:
Deploying application
running sdist
running egg_info
writing requirements to myproject.egg-info/requires.txt
error: [Errno 13] Permission denied: 'myproject.egg-info/requires.txt'
ValueError: Cannot find file (or expand pattern): 'dist/*'
How can I run the deploy step as root, or otherwise get around this issue?
Not sure whether some smartness can be applied within tox itself, but you could start your deploy stage with a script along the following lines:
- sudo chown --changes --recursive $(whoami):$(id --group $(whoami)) .
This changes the ownership of all files in the current directory to the current user and the current user's main group.
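In .travis.yml that could look something like the following (using before_deploy as one plausible hook; where exactly the chown belongs depends on how your deploy step is configured):

before_deploy:
  - sudo chown --changes --recursive $(whoami):$(id --group $(whoami)) .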