I need to run a different compiler depending on which version of an app the file being edited is destined for.
The source file is always stored in a path that contains the version number.
So file
%APPDATA%\App\9\Settings\Source.file
Would need to run
%ProgramFiles%\App\9\Compiler.exe "%APPDATA%\App\9\Settings\Source.file"
and
%APPDATA%\App\11\Settings\Source.file
Would need to run
%ProgramFiles%\App\11\Compiler.exe "%APPDATA%\App\11\Settings\Source.file"
I have tried following the advanced example here:
https://www.sublimetext.com/docs/3/build_systems.html#advanced_example
but Python really isn't my thing and I can't seem to get anything to run.
A basic build system works, but I can't specify the version:
{
    "cmd": ["c:\\Program Files\\App\\10\\compile.exe", "$file"],
    "selector": "source.app",
    "file_patterns": "*.ext"
}
But this doesn't:
{
    "target": "app_build",
    "selector": "source.app",
    "file_patterns": "*.ext"
}
.py file:

import sublime
import sublime_plugin

class AppBuildCommand(sublime_plugin.WindowCommand):
    def run(self):
        vars = self.window.extract_variables()
        compiler = vars['file_path']
        compiler = compiler.split("\\")
        compiler_path = "c:\\Program Files\\App\\" + compiler[compiler.index("App")+1] + "\\Compiler.exe"
        file = vars['file']
        self.window.run_command(compiler_path + " \"" + file + "\"")
Also tried with no success:

    args = []
    args.append(compiler)
    args.append(file)
    self.window.run_command("cmd", args)
The run_command() method is for executing internal Sublime commands (i.e. the things you would bind to a key or trigger from a menu), so part of the reason why this isn't working for you is that you're trying to get Sublime to run a command instead of having it execute an external program.
A build system is normally executed by the exec command; this is essentially the default value of the target key if you don't provide it in your sublime-build file. exec is responsible for using the arguments given to it by Sublime to start up an external process, capture its output, and display it in the output panel at the bottom of the window.
In order to customize what gets executed, you do need to implement your own WindowCommand and use the target key to tell Sublime to run it, but that command is then responsible for doing what exec would do, which includes starting an external process and capturing the output.
The example that's listed in the documentation uses subprocess.Popen() to perform this task, along with having to track if the task is running to close it, etc.
An easy way to pull this off is to create your command as a subclass of the command that Sublime normally uses to run the build, so that you can customize how the build starts but let existing code take care of all of the details.
An example of such a command would be something like the following:
import sublime
import sublime_plugin

import os

from Default.exec import ExecCommand

class AppBuildCommand(ExecCommand):
    def run(self, **kwargs):
        # Get the list of variables known to build systems
        variables = self.window.extract_variables()

        # Is there a file path? There won't be if the file hasn't been saved
        # yet.
        if "file_path" in variables:
            # Pull out the file path, split it into parts, and use the segment
            # after the "App" segment as the value of a new variable named
            # version.
            file_path = variables["file_path"].upper().split(os.sep)
            if "APP" in file_path:
                variables["version"] = file_path[file_path.index("APP") + 1]

        # Expand any remaining variables in our arguments and then execute the
        # build.
        kwargs = sublime.expand_variables(kwargs, variables)
        super().run(**kwargs)
An example of a sublime-build file that uses this command for the build would be:
{
    "target": "app_build",
    "cancel": {"kill": true},
    "cmd": ["c:\\Program Files\\App\\\\${version:100}\\compile.exe", "$file"],
    "selector": "source.app",
    "file_patterns": ["*.ext"]
}
The command itself examines the path of the current file for the segment that follows the App segment (here case insensitively just in case), and uses that to create a new build system variable named version. If the file hasn't been saved yet (and thus has no name on disk) or if there isn't a path segment named App in it, then the variable is not created.
The last two lines expand any variables that haven't been expanded yet, and then tell the exec command to execute the build.
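Outside of Sublime, the path-parsing step can be checked on its own; this sketch swaps os.sep for ntpath.sep so the Windows path rules apply on any platform, and the paths shown are hypothetical examples:

```python
import ntpath  # Windows path rules regardless of the current platform

def version_from(path):
    # Split on the Windows separator and take the segment that follows
    # "App", comparing case-insensitively as the command above does.
    parts = path.upper().split(ntpath.sep)
    if "APP" in parts:
        return parts[parts.index("APP") + 1]
    return None  # unsaved file or no "App" segment: no version variable

print(version_from(r"C:\Users\me\AppData\Roaming\App\9\Settings\Source.file"))   # 9
print(version_from(r"C:\Users\me\AppData\Roaming\App\11\Settings\Source.file"))  # 11
```

Note that "APPDATA" does not match, since the comparison is against the whole path segment, not a prefix.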
The sublime-build file can be in every way a normal sublime-build file, but the changes you need to make are:
You need to use target to indicate that our command should be the one to execute
You should (but don't need to) set the cancel key to tell Sublime how to cancel your build if you choose the cancel build command; this one tells Sublime to execute the same app_build command that it used to start the build, but give it an extra argument to say the build should be terminated.
Anywhere you want to access the version number from the path of the file, use the notation \\${version:DEFAULT}.
The variable needs to be specified this way (i.e. with the two \\ characters in front) because Sublime will automatically try to expand any variables in the build system keys that exec recognizes before it calls the command.
Since at the time the command gets called we haven't set the value of the version variable yet, this will make Sublime assume that the value is the empty string, which removes it from the string and stops us from being able to detect that it's there.
In order to get Sublime to leave the variable alone, you need to use the \$ notation as an indication that this should not be treated as a special $ character; Sublime will convert \$ to $ and pass it through to our command. In the sublime-build file you need to use \\$ because \$ is not a valid JSON character escape.
The :DEFAULT portion is optional; this is what will be used as the expanded text if the version variable isn't set. In the example above it's set to 100, but in practice you'd set it to something like 9 or 11 instead. The variable won't be set if the command couldn't figure out the version to use, so you can use this to set a default version to be used in that case (if this makes sense for your use case).
This video series on build systems in Sublime Text has more information in general on how build systems work, including several videos on custom build targets and how they work, with more information on advanced custom targets that subclass the ExecCommand as we're doing here (disclaimer: I am the author).
Right up front, to be clear: I am not fluent in programming or Python, but I can generally accomplish what I need with some research. Please excuse any bad formatting, as this is my first post to a board like this.
I recently updated my laptop from Ubuntu 18.04 to 20.04. I had created a full system backup with Dejadup which, due to a missing file, could not be restored. Research brought me to a post on here from 2019 about manually restoring these files. The process called for two scripts, one to unpack and a second to reconstruct the files, both created by Hamish Downer.
The first,
for f in duplicity-full.*.difftar.gz; do echo "$f"; tar xf "$f"; done
seemed to work well and did unpack the files.
The second,
#!/usr/bin/env python3
import argparse
from pathlib import Path
import shutil
import sys
is the start of the reconstructor script. Using a terminal from within the directory I am trying to rebuild, I enter the first line and press return.
When I enter the second line of code, the terminal just "hangs" with no activity, and will only come back to the prompt if I double-click the cursor. I receive no errors or warnings. When I enter the third line of code
"from pathlib import Path"
and return I then get an error
from: can't read /var/mail/pathlib
The problem seems to originate with the "import argparse" command and I assume is due to a symlink.
argparse is located in /usr/local/lib/python3.8/dist-packages (1.4.0)
python3 is located in /usr/bin/
Python came with the Ubuntu 20.04 distribution package.
Any help with reconstructing these files would be greatly appreciated, especially in a batch as this script is meant to do versus trying to do them one file at a time.
Update: I have tried adding the "re-constructor" part of this script without success. This is a link to the script I want to use:
https://askubuntu.com/questions/1123058/extract-unencrypted-duplicity-backup-when-all-sigtar-and-most-manifest-files-are
Re-constructor script:
class FileReconstructor():

    def __init__(self, unpacked_dir, restore_dir):
        self.unpacked_path = Path(unpacked_dir).resolve()
        self.restore_path = Path(restore_dir).resolve()

    def reconstruct_files(self):
        for leaf_dir in self.walk_unpacked_leaf_dirs():
            target_path = self.target_path(leaf_dir)
            target_path.parent.mkdir(parents=True, exist_ok=True)
            with target_path.open('wb') as target_file:
                self.copy_file_parts_to(target_file, leaf_dir)

    def copy_file_parts_to(self, target_file, leaf_dir):
        file_parts = sorted(leaf_dir.iterdir(), key=lambda x: int(x.name))
        for file_part in file_parts:
            with file_part.open('rb') as source_file:
                shutil.copyfileobj(source_file, target_file)

    def walk_unpacked_leaf_dirs(self):
        """
        based on the assumption that all leaf files are named as numbers
        """
        seen_dirs = set()
        for path in self.unpacked_path.rglob('*'):
            if path.is_file() and path.parent not in seen_dirs:
                seen_dirs.add(path.parent)
                yield path.parent

    def target_path(self, leaf_dir_path):
        return self.restore_path / leaf_dir_path.relative_to(self.unpacked_path)

def parse_args(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'unpacked_dir',
        help='The directory with the unpacked tar files',
    )
    parser.add_argument(
        'restore_dir',
        help='The directory to restore files into',
    )
    return parser.parse_args(argv)
def main(argv):
    args = parse_args(argv)
    reconstructor = FileReconstructor(args.unpacked_dir, args.restore_dir)
    return reconstructor.reconstruct_files()

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
I think you are typing the commands into the shell instead of the Python interpreter. Please check your prompt; the Python interpreter (started with python3) shows >>>.
Linux has an import command (part of ImageMagick) and it accepts import argparse, but it does something completely different.
import - saves any visible window on an X server and outputs it as an
image file. You can capture a single window, the entire screen, or any
rectangular portion of the screen.
This matches the described behaviour. import waits for a mouse click and then creates a large output file. Check if there is a new file named argparse.
An executable script contains instructions to be processed by an interpreter, and there are many possible interpreters: several shells (bash and alternatives), languages like Perl and Python, and also some very specialized ones like nft for firewall rules.
If you execute a script from the command line, the shell reads its first line. If it starts with the #! characters (called a "shebang"), the shell uses the program listed on that line. (Note: /usr/bin/env there is just a helper to find the exact location of a program.)
But if you want to use an interpreter interactively, you need to start it explicitly. The shebang line has special meaning only as the very first line of a script; anywhere else it is just a comment and is ignored.
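To see the difference in action: the same lines that hang when typed at a shell prompt run fine when the whole file is handed to the interpreter. A self-contained sketch, where a temporary file stands in for the real reconstructor script:

```python
import os
import subprocess
import sys
import tempfile

# Write a tiny script that begins exactly like the reconstructor does.
script = "#!/usr/bin/env python3\nimport argparse\nprint('ok')\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

# Handing the file to the interpreter works; typing the same lines at a
# shell prompt would invoke the shell's own `import` (ImageMagick) instead.
out = subprocess.check_output([sys.executable, path])
os.unlink(path)
print(out.decode().strip())  # ok
```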
I have a file, . /home/test.sh (the space between the first . and / is intentional), which contains some environment variables. I need to load this file and then run a .py script. If I run the command manually first on the Linux server and then run the Python script, it generates the required output. However, I want to call . /home/test.sh from within Python to load the profile and then run the rest of the code. If this profile is not loaded, the Python script runs and gives 0 as output.
The call
subprocess.call('. /home/test.sh',shell=True)
runs fine but the profile is not loaded on the Linux terminal to execute python code and give the desired output.
Can someone help?
Environment variables set in a child process are not propagated back to the parent process, which is why your simple approach does not work.
If you are trying to pick up environment variables that have been set in your test.sh, then one thing you could do instead is to use env in a sub-shell to write them to stdout after sourcing the script, and then in Python you can parse these and set them locally.
The code below will work provided that test.sh does not write any output itself. (If it does, then what you could do to work around it would be to echo some separator string after sourcing it and before running env, and then in the Python code strip off the separator string and everything before it.)
import subprocess
import os

p = subprocess.Popen(". /home/test.sh; env -0", shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, _ = p.communicate()

for varspec in out.decode().split("\x00")[:-1]:
    pos = varspec.index("=")
    name = varspec[:pos]
    value = varspec[pos + 1:]
    os.environ[name] = value

# just to test whether it works - output of the following should include
# the variables that were set
os.system("env")
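The separator-string workaround mentioned above can be sketched the same way. Here a temporary file stands in for test.sh, and it deliberately both prints its own output and sets a variable, which is exactly the case the separator handles:

```python
import os
import subprocess
import tempfile

# Hypothetical stand-in for test.sh: prints some chatter *and* exports
# a variable named DEMO_VAR.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write('echo "some chatter"\nexport DEMO_VAR=hello\n')
    profile = f.name

sep = "---ENV-FOLLOWS---"
out = subprocess.check_output(
    ". {}; echo '{}'; env -0".format(profile, sep), shell=True)
os.unlink(profile)

# Strip the separator line and everything before it, then parse the
# NUL-delimited variables produced by env -0.
env_blob = out.split((sep + "\n").encode(), 1)[1]
for varspec in env_blob.decode().split("\x00")[:-1]:
    name, _, value = varspec.partition("=")
    os.environ[name] = value

print(os.environ["DEMO_VAR"])  # hello
```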
It is also worth considering that if all that you want to do is set some environment variables every time before you run any python code, then one option is just to source your test.sh from a shell-script wrapper, and not try to set them inside python at all:
#!/bin/sh
. /home/test.sh
exec /path/to/your/python/script "$@"
Then when you want to run the Python code, you run the wrapper instead.
Some git commands, git commit for example, invoke a command-line text editor (such as vim or nano, or another) pre-filled with some values and, after the user saves and exits, do something with the saved file.
How should I go about adding this functionality to a similar Python command-line program on Linux?
Please don't hold back an answer if it doesn't use Python; I will be quite satisfied with a generic abstract answer, or an answer as code in another language.
The solution will depend on what editor you have, which environment variable the editor might possibly be found in and if the editor takes any command line parameters.
This is a simple solution that works on Windows without any environment variables or command-line arguments to the editor. Modify as needed.
import subprocess
import os.path

def start_editor(editor, file_name):
    if not os.path.isfile(file_name):  # If file doesn't exist, create it
        with open(file_name, 'w'):
            pass
    command_line = editor + ' ' + file_name  # Add any desired command line args
    p = subprocess.Popen(command_line)
    p.wait()

file_name = 'test.txt'   # Probably known from elsewhere
editor = 'notepad.exe'   # Read from environment variable if desired

start_editor(editor, file_name)

with open(file_name, 'r') as f:  # Do something with the file, just an example here
    for line in f:
        print(line)
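On Linux, the usual replacement for hard-coding notepad.exe is to consult the VISUAL/EDITOR environment variables, which is what git itself does before falling back to a default. A sketch under that assumption (edit_text is a hypothetical helper name):

```python
import os
import shlex
import subprocess
import tempfile

def edit_text(initial=""):
    # Pick the editor the way git does: $VISUAL, then $EDITOR, then a fallback.
    editor = os.environ.get("VISUAL") or os.environ.get("EDITOR") or "vi"
    # Pre-fill a temporary file, hand it to the editor, read the result back.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(initial)
        path = f.name
    try:
        # shlex.split allows EDITOR values with flags, e.g. "code --wait".
        subprocess.call(shlex.split(editor) + [path])
        with open(path) as f:
            return f.read()
    finally:
        os.unlink(path)
```

Calling edit_text("# enter a message") blocks until the editor exits, then returns whatever the user saved.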
OK, this thing is driving me crazy right now. Action 1 chooses a folder (I want to save that folder's path as var_1) and Action 3 selects a file (I want to save this file's path as var_2),
so in the end . . .
var_1 = '/Users/Prometheus/Desktop/'
var_2 = '/Users/Prometheus/Documents/a.txt'
So how do I use these variables and their values inside a Run Shell Script action with Python? I can't use sys.argv because they are set to some weird values.
I usually put 'Ask for Finder Item' > Run Shell Script and then
import sys
variable = open(sys.argv[1]).read()
but I can't use that in this case. My scripts are in Python, so I'd rather stay in Python because I don't know any other language.
The Automator variables are only used in the Automator workflow. The variables themselves are not directly accessible to either a shell script or a Python script. The Run Shell Script action allows you to pass the values of particular variables to a shell script in either of two ways: piping them in through stdin, or passing them as execution arguments. For this sort of use case, the latter is easier. To start with, you need to pick Automator variable names in the Set Value of Variable and Get Value of Variable actions so the values selected can be retained between actions. Here's a very rudimentary workflow example where I've selected two folders.
You might use a Run AppleScript action like this to display the dialogs:
POSIX path of (choose folder default location (path to desktop))
result & linefeed & POSIX path of (choose file default location (path to desktop))
Then set "Pass input" to "to stdin" in the Run Shell Script action and use a script like this:
import sys
folder, file = sys.stdin.read().splitlines()
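If you set "Pass input" to "as arguments" instead, the same two values arrive in sys.argv rather than on stdin. A simulated sketch (the order, folder first and then file, is an assumption that depends on your workflow):

```python
# Simulate what the Run Shell Script action would pass with
# "Pass input: as arguments"; argv[0] is the script name as usual.
argv = ["script.py",
        "/Users/Prometheus/Desktop/",
        "/Users/Prometheus/Documents/a.txt"]
folder, file = argv[1], argv[2]
print(folder)
print(file)
```

In the real action you would read sys.argv[1] and sys.argv[2] instead of the simulated list.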
Running !import code; code.interact(local=vars()) inside the pdb prompt allows you to input multiline statements (e.g. a class definition) inside the debugger (source). Is there any way to avoid having to copy-paste/type that full line each time?
I was thinking about Conque for vim and setting something like :noremap ,d i!import code; code.interact(local=vars())<Esc> but editing anything outside of insert mode doesn't seem to have any effect on the prompt.
pdb reads in a .pdbrc file when it starts. From the Python docs:
If a file .pdbrc exists in the user’s home directory or in the current directory, it is read in and executed as if it had been typed at the debugger prompt. This is particularly useful for aliases. If both files exist, the one in the home directory is read first and aliases defined there can be overridden by the local file.
So try creating that file and putting that command in there as is.
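Alternatively, if you'd rather not drop into the REPL on every debugger start, a .pdbrc can define an alias instead (the alias name interact here is arbitrary):

    alias interact !import code; code.interact(local=vars())

After that, typing interact at the (Pdb) prompt runs the full line for you.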