Is there an easy way to edit our python files in the Jenkins workspace UI?
It would be super nice if we could get code highlighting too!
There is a Jenkins plugin that lets you edit files: Config File Provider
It can't edit arbitrary files, but you can use it to achieve what you want.
The plugin stores its data as XML files in the Jenkins home folder. This means you could write a script that recreates those files wherever you need them by parsing the XML (the plugin does this for the workspace itself, although that requires a build step). For instance, I could add a new custom config file like this:
Name: script.sh
Comment: /var/log
Content: ....
That config will then be available in an XML file, which you could parse from a cron job to create the actual files where you need them.
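A rough sketch of that idea in Python, assuming the plugin keeps its custom configs in an XML file under the Jenkins home (the storage file and element names vary by plugin version, so inspect your installation first):
import xml.etree.ElementTree as ET
# Hypothetical storage location of the Config File Provider plugin; adjust for your install.
STORE = "/var/lib/jenkins/org.jenkinsci.plugins.configfiles.GlobalConfigFiles.xml"
tree = ET.parse(STORE)
# Assumed element/tag names; verify them against your actual XML.
for cfg in tree.getroot().iter("org.jenkinsci.plugins.configfiles.custom.CustomConfig"):
    name = cfg.findtext("name")           # e.g. script.sh
    target_dir = cfg.findtext("comment")  # e.g. /var/log (comment used as target path)
    content = cfg.findtext("content")
    if name and target_dir and content:
        with open(target_dir.rstrip("/") + "/" + name, "w") as f:
            f.write(content)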
The closest thing I can think of that Jenkins offers is a file upload. You can upload a file with local changes and then trigger a build; the file will be placed at the location you specified in the workspace. This feature is enabled by making your build parameterized and adding a File Parameter. Below is Jenkins' description of this feature.
Accepts a file submission from a browser as a build parameter. The uploaded file will be placed at the specified location in the workspace, which your build can then access and use.
This is useful for many situations, such as:
Letting people run tests on the artifacts they built.
Automating the upload/release/deployment process by allowing the user to place the file.
Performing data processing by uploading a dataset.
It is possible to not submit any file. If that's the case and no file is already present at the specified location in the workspace, then nothing happens. If a file is already present in the workspace, it will be kept as-is.
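If you want to push a locally edited file and kick off such a parameterized build without going through the UI, something along these lines should work against the remote API (a sketch; the Jenkins URL, job name, parameter location, and credentials are placeholders):
import requests
# Hypothetical values: adjust the URL, job name, credentials, and the
# File Parameter's "File location" as configured in the job.
JENKINS = "https://jenkins.example.com"
JOB = "my-python-job"
files = {"file0": open("scripts/myscript.py", "rb")}
data = {"json": '{"parameter": [{"name": "scripts/myscript.py", "file": "file0"}]}'}
resp = requests.post(JENKINS + "/job/" + JOB + "/build",
                     auth=("user", "api-token"),
                     files=files, data=data)
resp.raise_for_status()
Depending on your security settings you may also need to pass a CSRF crumb with the request.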
I've seen plenty of questions and docs about how to download artifacts generated in a workflow to pass between jobs. However, I've only found one thread about persisting downloaded files between steps of the same job, and am hoping someone can help clarify how this should work, as the answer on that thread doesn't make sense to me.
I'm building a workflow that navigates a site using Selenium and exports data manually (sadly there is no API). When running this locally, I am able to navigate the site just fine and click a button that downloads a CSV. I can then re-import that CSV for further processing (ultimately, it's getting cleaned and sent to Redshift). However, when I run this in GitHub Actions, I am unclear where the file is downloaded to, and am therefore unable to re-import it. Some things I've tried:
Echoing the working directory when the workflow runs, and setting up my pandas.read_csv() call to import the file from that directory.
Downloading the file and then echoing os.listdir() to print the contents of the working directory. When I do this, the CSV file is not listed, which makes me believe it was not saved to the working directory as expected. (which would explain why #1 doesn't work)
FWIW, the website in question does not give me the option to choose where the file downloads. When run locally, I hit the button on the site, and it automatically exports a CSV to my Downloads folder. So I'm at the mercy of wherever GitHub decides to save the file.
Last, because I feel like someone will suggest this - it is not an option for me to use read_html() to scrape the file from the page's HTML.
Thanks in advance!
I have been experimenting with Python Cloud Functions. One of my cloud functions uses a large text file that I would love to bundle with my .py file when I deploy it. The docs are rather limited on this.
https://cloud.google.com/functions/docs/quickstart
I was wondering: would I just include that file, along with a requirements file, in the same directory as my function when deploying? Or do I have to somehow require it in my code?
Also, is there any information on how to use a database trigger instead of an HTTP trigger? I was trying to work out whether my file didn't get included because I had defined the trigger the wrong way. How would you create an OnCreate trigger, or something like that?
gcloud beta functions deploy hello_get --runtime python37 --trigger-http
When you deploy your function using gcloud, your project directory is zipped and uploaded. Any files contained in the current directory (and child directories) will be uploaded as well, which includes static assets like your large text file.
Just make sure your text file is in your project directory (or one of the child directories), then use a relative reference to your file in your Python code. It should just work.
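For example, something like this should work (a sketch; "data.txt" is a placeholder for your bundled file, and hello_get matches the function name from the deploy command above):
import os
# Resolve the file relative to the function's source directory, so it
# works regardless of the working directory at runtime.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
DATA_PATH = os.path.join(BASE_DIR, "data.txt")  # hypothetical file name
def hello_get(request):
    with open(DATA_PATH) as f:
        data = f.read()
    return "Loaded {} characters".format(len(data))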
I have a Django app deployed on an Azure Web App, and I want to dynamically create webjobs. More precisely, when I save a Django model called Importer, I want to be able to create a new webjob made of 3 files:
run.py : the Python webjob
settings.job : the cron schedule
config.ini : a config file for the webjob
The content of the "settings.job" & "config.ini" comes from an admin form and is specific to the webjob.
When I save my model, the code creates a new directory in
App_Data\jobs\triggered\{my job name}
and copies the "run.py" from my app directory there.
This works. However, when I try to create a new text file called "settings.job" in the job directory and write the cron schedule into it, I get a server error.
I tried many things, but the following basic test causes a failure:
file = open('test.txt','w')
file.write('this is a test')
file.close()
It seems that I don't have the right to write a file to the disk. How can that be solved?
Also, I want to be able to modify the content of the config and settings.job files when I update the corresponding Django model.
In addition, I tried to copy another file called "run2.py" to the webjob directory, and that fails too! I cannot copy any file other than run.py into that directory.
According to your description and based on my experience, I think the issue is caused by using a relative path in your code.
On Azure Web Apps, you have permission to perform file operations under the path D:/home.
My suggestion is to use an absolute path instead of a relative one, such as D:/home/site/wwwroot/App_Data/jobs/triggered/<triggered-job-name>/test.txt or /home/site/wwwroot/App_Data/jobs/triggered/<triggered-job-name>/test.txt instead of test.txt. Also, make sure the directory and its parent directories exist before writing a file: use os.path.exists(path) to check, and os.makedirs to create them.
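For example (a sketch; the triggered-job name and schedule are placeholders):
import os
# Hypothetical triggered-job name; adjust to match your Importer model.
job_dir = "D:/home/site/wwwroot/App_Data/jobs/triggered/my-importer-job"
# Create the directory and any missing parents before writing into it.
if not os.path.exists(job_dir):
    os.makedirs(job_dir)
# Write settings.job with the cron schedule, using the absolute path.
with open(os.path.join(job_dir, "settings.job"), "w") as f:
    f.write('{"schedule": "0 */15 * * * *"}')  # e.g. every 15 minutes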
Meanwhile, you can also try the WebJobs API for some operations, such as creating a webjob by uploading a zip file or updating settings.job. To use these WebJobs APIs, make sure you have configured the Deployment credentials in the Azure portal, and add the basic auth header Authorization: Basic <BASE64-Encode("deployment-username":"password")> to the request.
Here is my analysis:
there is no problem copying files other than "run.py" to the webjob directory
the code crashes after (successfully) copying the file "run.py" from my Django app directory to the webjob directory. It does not matter whether I use shutil.copy/shutil.copyfile or simply open("{path}/run.py","w"); the problem occurs when I try to write a file called "run.py" to disk in the webjob directory.
I think that when we create a new webjob directory, if the system detects a file called "run.py" it tries to carry out some operations on it. There is then a conflict, with two processes trying to access the same file at the same time.
My solution: I copy the Python script to the webjob directory under the name "myrun.txt", and then rename it to run.py using os.rename.
It works!
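A minimal sketch of that workaround (paths are placeholders):
import os
import shutil
# Hypothetical paths; adjust to your Django app and WebJob directories.
app_dir = "D:/home/site/wwwroot/myapp"
job_dir = "D:/home/site/wwwroot/App_Data/jobs/triggered/my-importer-job"
# Copy the script under a neutral name so the WebJobs host does not try
# to pick up "run.py" while it is still being written...
shutil.copyfile(os.path.join(app_dir, "run.py"),
                os.path.join(job_dir, "myrun.txt"))
# ...then rename it into place once the copy is complete.
os.rename(os.path.join(job_dir, "myrun.txt"),
          os.path.join(job_dir, "run.py"))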
I crafted this little script which helps me upload files from the terminal to Google Drive. I am going to use it with cron to make my life a bit easier.
This is the script I have: https://github.com/goranpejovic/drive-uploader/blob/master/upload.py
It essentially uploads a file to GDrive and optionally converts it to Google Docs format. Now, I want to implement two more things, but I am not sure how.
First, I want the ability to upload to folders remotely. Not just create a folder locally and upload it (that would be nice too, now that I think of it), but upload a file into a remote folder on Google Drive. Is this possible? If so, any suggestions?
Second, I would like to be able to replace files that already exist on Drive. Obviously the filename alone doesn't matter to GDrive, so I am assuming it has to do with the metadata I am passing as "body". Or is there some other way?
Thank you!
Creating a folder in Drive is similar to creating a file; the difference is the MIME type: "application/vnd.google-apps.folder". So instead of uploading a folder, you use files.insert to create a new one in Drive.
Creating or inserting a file in Drive will, as you mentioned, create a new file even if the name already exists. In order to replace a file, you have to update the existing file with the new information.
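A rough sketch of both operations against the Drive v2 API (assuming service is the authorized client built in the linked script; names and IDs are placeholders):
from googleapiclient.http import MediaFileUpload
# "service" is the authorized Drive v2 client from the linked script.
# 1) Create a folder: it is just a file with the folder MIME type.
folder = service.files().insert(
    body={"title": "my-uploads",  # hypothetical folder name
          "mimeType": "application/vnd.google-apps.folder"},
    fields="id").execute()
# 2) Upload a file into that folder by listing the folder among its parents.
service.files().insert(
    body={"title": "report.csv", "parents": [{"id": folder["id"]}]},
    media_body=MediaFileUpload("report.csv")).execute()
# 3) Replace the content of an existing file instead of creating a new one
#    (look up its ID first, e.g. via service.files().list()).
existing = service.files().get(fileId="existing-file-id").execute()
service.files().update(
    fileId="existing-file-id",
    body=existing,
    media_body=MediaFileUpload("report.csv")).execute()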
I'm using AWS for the first time and have just installed boto for Python. I'm stuck at the step where it advises:
"You can place this file either at /etc/boto.cfg for system-wide use or in the home directory of the user executing the commands as ~/.boto."
Honestly, I have no idea what to do. First, I can't find the boto.cfg and second I'm not sure which command to execute for the second option.
Also, when I deploy the application to my server, I'm assuming I need to do the same thing there too...
"You can place this file either at /etc/boto.cfg for system-wide use
or in the home directory of the user executing the commands as
~/.boto."
The former simply means that you may create a configuration file named boto.cfg within the directory /etc (i.e. it won't necessarily be there already, depending on how boto was installed on your particular system).
The latter is indeed phrased a bit unfortunately - ~/.boto means that boto will look for a configuration file named .boto in the home directory of the user executing the commands (i.e. the Python scripts) that use the boto library.
You can read more about this in the boto wiki article BotoConfig, e.g. regarding the question at hand:
A boto config file is simply a .ini format configuration file that specifies values for options that control the behavior of the boto library. Upon startup, the boto library looks for configuration files in the following locations and in the following order:
/etc/boto.cfg - for site-wide settings that all users on this machine will use
~/.boto - for user-specific settings
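For reference, a minimal config file in that .ini format might look like this (placeholder credentials):
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY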
You'll indeed need to prepare a respective configuration file on the server your application is deployed to as well.
Good luck!
For those who want to configure the credentials in Windows:
1- Create your file with the name you want (e.g. boto_config.cfg) and place it in a location of your choice (e.g. C:\Users\\configs).
2- Create an environment variable with Name = 'BOTO_CONFIG' and Value = file_location/file_name
3- Boto is now ready to work, with the credentials automatically configured!
To create environment variables in Windows follow this tutorial: http://www.onlinehowto.net/Tutorials/Windows-7/Creating-System-Environment-Variables-in-Windows-7/1705
For anyone looking for information on the now-current boto3: it does not use a separate configuration file, but rather respects the defaults created by the AWS CLI when running aws configure (i.e., it will look at ~/.aws/config).