A build system on a selected text - python

I have the build system for Postgres:
{
"cmd": ["psql", "-U", "postgres", "-d", "test", "-o", "c:/app/sql/result.txt", "-f", "$file"]
}
It works fine: it executes the current file and sends the results to c:/app/sql/result.txt.
I want to modify it so that it automatically saves the current selection to a file and runs psql on that file. Can this be done in a build system?

As my learning has borne fruit, let me answer my own question. This simple plugin saves the selected text to a temporary file and then calls the build system (whose -f argument should point at that temporary file rather than $file):
import sublime, sublime_plugin

class ExecuteSelectedSqlCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        for region in self.view.sel():
            if not region.empty():
                with open('/a/temporary/file', 'w') as f:
                    f.write(self.view.substr(region))
                self.view.window().run_command('build')
                break
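A variant of the same idea (a sketch, not part of the original answer) skips the hard-coded temporary path and the second build system entirely: write the selection to a tempfile and hand the psql command straight to Sublime's exec command, which is what build systems use under the hood. The command-line arguments are copied from the build system above.
import tempfile

import sublime
import sublime_plugin


class ExecuteSelectedSqlDirectCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        for region in self.view.sel():
            if not region.empty():
                # Write the selection to a throwaway .sql file.
                with tempfile.NamedTemporaryFile('w', suffix='.sql', delete=False) as f:
                    f.write(self.view.substr(region))
                    path = f.name
                # Run psql on that file via the built-in exec command.
                self.view.window().run_command('exec', {
                    'cmd': ['psql', '-U', 'postgres', '-d', 'test',
                            '-o', 'c:/app/sql/result.txt', '-f', path]
                })
                break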

Related

how to run an executable with python json

So I have this executable binary file that can be run via the terminal and from Python by running `python3 shell_c.py`, where the Python file contains:
import subprocess

def executable_shell():
    x = subprocess.run('cd build && ./COSMO/methane_c0.outmol.cosmo', shell=True, capture_output=True)
    print(x)

executable_shell()
where COSMO is my executable name, "methane_c0.outmol" is the dynamic value that should be changed, and ".cosmo" is the extension.
To get these values from a JSON file, I have created one with this content:
{
"root_directory": "C:\\Users\\15182\\cosmo theory\\COSMO\\UDbase8",
"file_name": "methane_c0",
"file_format": ".cosmo",
"output1": "N_atoms",
"output2": "total number of segments"
}
Now all that is left is to pass the values of file_name and file_format to the subprocess code to run it, but I am not getting how to go about it.
The code I have written so far is basic:
import json

with open("parameters.json") as file:
    data = json.load(file)

print(type(data))
print(data)
How should I proceed so that the values can be passed to the Python file?
Something like this?
import json
import subprocess

with open("parameters.json") as file:
    data = json.load(file)

dynamic_file_name = data['file_name'] + '.outmol' + data['file_format']

def executable_shell():
    x = subprocess.run('cd build && ./COSMO/' + dynamic_file_name, shell=True, capture_output=True)
    print(x)

executable_shell()
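If you later want to drop shell=True, a shell-free variant is also possible (just a sketch, assuming the same build/ directory layout as the original command): pass the executable path as a list and let cwd replace the cd.
import json
import subprocess

with open("parameters.json") as file:
    data = json.load(file)

dynamic_file_name = data['file_name'] + '.outmol' + data['file_format']

# cwd='build' replaces the "cd build &&" part of the shell command.
result = subprocess.run(['./COSMO/' + dynamic_file_name], cwd='build', capture_output=True)
print(result.returncode, result.stdout)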

How to pass pipeline credential parameter to python script as env variable

I have a pipeline job with a credential parameter (user name and password), and also a Groovy file that runs a shell script which triggers a Python file.
How can I pass those parameters into the environment so the Python script can use them with os.getenv?
Groovy file code:
def call() {
    final fileContent = libraryResource('com/amdocs/python_distribution_util/main.py')
    writeFile file: 'main.py', text: fileContent
    sh "python main.py"
}
I know the pipeline syntax should look something like this:
withCredentials([usernamePassword(credentialsId: '*****', passwordVariable: 'ARTIFACTORY_SERVICE_ID_PW', usernameVariable: 'ARTIFACTORY_SERVICE_ID_UN')]) {
// some block
}
What is the correct way of doing it?
Correct syntax:
def call() {
    def DISTRIBUTION_CREDENTIAL_ID = params.DISTRIBUTION_CREDENTIAL_ID
    withCredentials([usernamePassword(credentialsId: DISTRIBUTION_CREDENTIAL_ID, passwordVariable: 'ARTIFACTORY_SERVICE_ID_PW', usernameVariable: 'ARTIFACTORY_SERVICE_ID_UN')]) {
        sh '''
            export ARTIFACTORY_SERVICE_ID_UN=${ARTIFACTORY_SERVICE_ID_UN}
            export ARTIFACTORY_SERVICE_ID_PW=${ARTIFACTORY_SERVICE_ID_PW}
            python main.py
        '''
    }
}
And then you can use a config.py file to pull the values using:
import os
ARTIFACTORY_SERVICE_ID_UN = os.getenv('ARTIFACTORY_SERVICE_ID_UN')
ARTIFACTORY_SERVICE_ID_PW = os.getenv('ARTIFACTORY_SERVICE_ID_PW')
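Since os.getenv returns None when a variable was never injected, a small guard (just a sketch, same variable names as above) makes a missing credential fail loudly instead of propagating None into later calls:
import os

ARTIFACTORY_SERVICE_ID_UN = os.getenv('ARTIFACTORY_SERVICE_ID_UN')
ARTIFACTORY_SERVICE_ID_PW = os.getenv('ARTIFACTORY_SERVICE_ID_PW')

# Fail fast if the pipeline did not inject the credentials.
if not ARTIFACTORY_SERVICE_ID_UN or not ARTIFACTORY_SERVICE_ID_PW:
    raise RuntimeError("Artifactory credentials were not found in the environment")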

Start a python script from grails 3

I created a web interface in Grails 3 where you can start different pipelines written in Python via a web environment. I have created a simple form with a start button. The idea is that when you press the start button, the Python pipeline is started. I can't figure it out; I have tried several things. An example is:
def cmd = "python amplicon_pipeline.py -i 'inputdir' -o 'outputdir' -a 'amplicon'"
def proc = cmd.execute()
proc.waitFor()
But nothing happens.
How can I get an external Python script to start when the start button is pressed?
Mention the full path to python and the script file.
Ex:
def cmd = ["/usr/bin/python", "/home/rm93/Documents/project_rivm/RIVM_amplicon_pipeline/amp‌​licon_pipeline.py", "-i", "/home/rm93/Documents/Git/BIGC_test_upload/amplicon_pipeline‌​/18/upload/", "-o", "/home/rm93/Documents/Git/BIGC_test_upload/amplicon_pipeline‌​/18/output/", "-a", "16sv4"]
def proc = cmd.execute()
proc.waitFor()
println proc.text

Python: Evaluating string of Python code

I am trying to create a Python script generation tool that creates Python files based on certain actions. For example:
action = "usb" # dynamic input to script
dev_type = "3.0" # dynamic input to script
from configuration import config
if action == "usb":
script = """
#code to do a certain usb functionality
dev_type = """ + dev_type + """
if dev_type == "2.0"
command = config.read("USB command 2.0 ")
proc = subprocess.Popen(command,stdout=subprocess.PIPE,
stderr=subprocess.PIPE, shell=False)
elif dev_type == "3.0":
command = config.read("USB command 3.0 ")
proc = subprocess.Popen(command,stdout=subprocess.PIPE,
stderr=subprocess.PIPE, shell=False)
"""
elif action == "lan":
script = """
#code to run a certain lan functionality for eg:
command = config.read("lan command")
proc = subprocess.Popen(command,stdout=subprocess.PIPE,
stderr=subprocess.PIPE, shell=False)
"""
with open("outputfile.py", w) as fl:
fl.writelines(script)
The actual script generator is much more complex.
I generate the outputfile.py script on my local machine, passing the action as input, and then deploy the generated script to remote machines to record the results. This is part of a larger framework and I need to keep this execution format.
Each "script" block uses a config.read function to read from a config file the variables it needs in order to run ("command" in the above example).
My actual framework has some 800 configurations in the config file, which I deploy on the remote machines along with the script to run. So there are a lot of extra configurations in that file that may not be required for a particular script to run, and it is not very user friendly.
What I am looking for is, based on the "script" block that gets written to the output file, to create a custom config file that contains only the config required for that script to run.
For example, if action is "lan", the script block below gets written to the output file:
script = """
#code to run a certain lan functionality for eg:
command = config.read("lan command")
proc = subprocess.Popen(command,stdout=subprocess.PIPE,
stderr=subprocess.PIPE, shell=False)
"""
What I want is only the "lan command" in my custom config file.
My question is: how do I evaluate the "script" block (inside the triple quotes) to know which configs are being used in that code, and then write those configs to my custom config file when I write the output file?
There can also be multiple if conditions inside the "script" block. I do not want both "USB command 2.0" and "USB command 3.0" in the custom config file if action = "usb" and dev_type = "3.0".
Of course, I cannot execute the code inside the "script" block and then intercept the config.read() function to write the configs that got called into my custom config file. Is there a better way to do it?
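One direction worth noting (a sketch, not from the original post): because every lookup in the generated text is a config.read call with a string literal, the script string can be scanned with a regular expression to collect the keys it references, without executing anything. Filtering out branches that cannot run for the chosen dev_type would still need extra logic.
import re

def collect_config_keys(script_text):
    """Return every key referenced via config.read("...") in the generated script text."""
    return re.findall(r'config\.read\("([^"]+)"\)', script_text)

# Hypothetical usage with the "lan" block above:
# collect_config_keys(script) -> ['lan command']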

mysql source command does nothing inside docker container

Description
I'm running a docker container with MySQL in it, and I want to run a Python script after MySQL has started, which will apply a dump to it.
Here is a snippet of Dockerfile:
FROM mysql:5.6
RUN apt-get update && \
apt-get install -y python
ADD apply_dump.py /usr/local/bin/apply_dump.py
ADD starter.sh /usr/local/bin/starter.sh
CMD ["/usr/local/bin/starter.sh"]
starter.sh:
nohup python '/usr/local/bin/apply_dump.py' &
mysqld
apply_dump.py:
import os
import urllib
import gzip
import shutil
import subprocess
import time
import logging
import sys
# wait for mysql server
time.sleep(5)
print "Start dumping"
dumpName = "ggg.sql"
dumpGzFile = dumpName + ".gz"
dumpSqlFile = dumpName + ".sql"
print "Loading dump {}...".format(dumpGzFile)
urllib.urlretrieve('ftp://ftpUser:ftpPassword@ftpHost/' + dumpGzFile, '/tmp/' + dumpGzFile)
print "Extracting dump..."
with gzip.open('/tmp/' + dumpGzFile, 'rb') as f_in:
    with open('/tmp/' + dumpSqlFile, 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
print "Dropping database..."
subprocess.call(["mysql", "-u", "root", "-proot", "-e", "drop database if exists test_db"])
print "Creating database..."
subprocess.call(["mysql", "-u", "root", "-proot", "-e", "create schema test_db"])
print "Applying dump..."
subprocess.call(["mysql", "--user=root", "--password=root", "test_db", "-e" "source /tmp/{}".format(dumpSqlFile)])
print "Done"
content of ggg.sql.gz is pretty simple:
CREATE TABLE test_table (id INT NOT NULL,PRIMARY KEY (id));
Problem
The database is created, but the table is not. If I go into the container and run this script manually, the table is created. If I replace the source command with a direct SQL CREATE statement, that works as well. But in reality the dump file will be pretty big, and only the source command will cope with it (or not only it?). Am I doing something wrong?
Thanks in advance.
Try passing your source SQL file into the MySQL command like this, instead of using the -e flag:
subprocess.call(["mysql", "--user=root", "--password=root", "test_db", "<", "/tmp/%s" % dumpSqlFile])
This will call import your SQL file using the widely used syntax:
mysql --user=root --password=root test_db < /tmp/source.sql
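An equivalent, shell-free variant (just a sketch, reusing the file names above) feeds the dump to mysql through stdin instead of relying on shell redirection:
import subprocess

# Open the dump and hand it to mysql on stdin; no shell is involved.
with open('/tmp/' + dumpSqlFile) as dump:
    subprocess.call(["mysql", "--user=root", "--password=root", "test_db"], stdin=dump)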
