How to generate a Fortify .fpr file for Python files.
A similar question, "Fortify, how to start analysis through command", lists the steps, but only for Java.
To generate reports for a Python project, -python-path has to be used.
I tried the following steps, but they did not work.
Step 1: Clean, build
sourceanalyzer -64 -Xms1024M -Xmx10000M -b -verbose -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/working/9999/working/sca.log -clean
Step 2: Scan: this step should generate the .fpr file
sourceanalyzer -b 9999 -verbose -Xms1024M -Xmx10000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/sca.log -python-path /path/to/python -f projec_999.fpr /local/proj/**/*.py
This did not generate any fpr file.
The second step gives these warnings:
[warning]: The -f option has no effect without the -scan option
[warning]: You may need to add some arguments to the -python-path argument to SCA.
I am not sure if I am using the correct command.
How can I make sure that all Python files in the directory and its subdirectories are being scanned?
Is there any option to add multiple Python paths?
The first step you did only does Clean, not the build step.
To perform the translation step for Python, you need to specify the directories for any Python references (-python-path) as well as the files to translate.
I am also not sure what you are doing with ProjectRoot and WorkingDirectory; you know these are used to store temp data/intermediate files for sourceanalyzer, not the location of your source code, correct?
Something like:
sourceanalyzer -b <buildId> -python-path <directories> <files to scan>
<buildId> can be used to group different projects. You are somewhat doing this yourself with ProjectRoot and WorkingDirectory (I am not sure if you need them both; I can't remember, and I no longer have access to test it out).
<directories> - this is where you list the directories that would normally be in your PYTHONPATH environment variable (you might be able to pass that variable directly, as in the sample below, and save a lot of hassle). This is a comma-separated list on Windows and a colon-separated list on Linux.
<files to scan> - this is where you specify the files you want to translate/scan. You can specify individual files or use wildcard characters (* and **/* [recursive]).
A sample command would look like:
sourceanalyzer -b MyApp -python-path %PYTHONPATH% ./MyApp/**/*
The other options you are using can still be applied; it would look something like this:
sourceanalyzer -b MyApp -Xms1024M -Xmx10G -logfile /local/proj/working/9999/working/sca.log -python-path %PYTHONPATH% ./MyApp/**/*
It is at this step that you would check which files were translated from your program:
sourceanalyzer -b MyApp -show-files
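To verify coverage (the question asked how to make sure every Python file in the tree gets scanned), a minimal sketch along these lines could compare the files on disk against the -show-files output. The build ID MyApp and source directory come from this example, and the assumption that -show-files prints one path per line is mine, not guaranteed by SCA:

import os, subprocess

# List the files SCA translated for this build ID
# (assumed output format: one file path per line)
shown = subprocess.check_output(
    ['sourceanalyzer', '-b', 'MyApp', '-show-files'], text=True)
translated = {line.strip() for line in shown.splitlines() if line.strip()}

# Walk the source tree and report any .py file that was not translated
for dirpath, _, filenames in os.walk('MyApp'):
    for name in filenames:
        if name.endswith('.py'):
            path = os.path.join(dirpath, name)
            if path not in translated:
                print('not translated:', path)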
Then you would perform the scan command:
sourceanalyzer -b MyApp -logfile /local/proj/working/9999/working/sca.log -scan -f project.fpr
You may apply -python-path multiple times, which sidesteps the question of which separator to use. The list of needed directories may be obtained with Python:
import sys
print(sys.path)
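Building on that, a minimal sketch that prints a ready-made value for -python-path, assuming the comma-on-Windows/colon-on-Linux convention described above:

import os, sys

# SCA expects a comma-separated list on Windows and a colon-separated list on Linux
sep = ',' if os.name == 'nt' else ':'
print(sep.join(p for p in sys.path if p))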
Related
I'm trying to execute a command for each file in a directory, but using its absolute path (such as /home/richi/mydir/myfile.py) instead of its relative path (such as myfile.py).
In other words, I want to execute a command on the files in a directory based on their absolute paths - similar to for file in *.py; do thecommand -a "$file"; done, but not quite.
I'm asking this because I'm trying to implement a Travis CI script running in an Ubuntu 14.04 environment which will install and use pyminifier to recursively minify all the Python code files in a directory.
Please note that what I'm asking may be similar to this post, but it's not.
Since you're on a standard Linux distro with a full userland, you can just use the realpath command:
Print the resolved absolute file name…
For example:
$ pwd
/home/abarnert/src/test
$ touch 1
$ realpath 1
/home/abarnert/src/test/1
That's it.
If you don't know how to use that from within bash, you can call a subcommand using $(…) syntax:
$ echo $(realpath 1)
/home/abarnert/src/test/1
Of course you want to pass it the value of the variable file, but that's just as easy:
$ file=1
$ echo $(realpath "$file")
/home/abarnert/src/test/1
I'm assuming you're using bash here. With a different sh-style shell, things will be different; with tcsh or zsh or fish or something, it may be even more different.
A really old userland, or a really stripped-down one (e.g., for an embedded system), might not include realpath. In that case, you can use readlink -f, since the GNU version, as usual, adds everything including a couple of kitchen sinks, and can serve as a realpath substitute.
Or, if worst comes to worst, Python has come with a realpath function since 2.2:
$(python -c 'import os,sys; print(os.path.realpath(sys.argv[1]))' "$file")
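For the recursive-minification use case, a minimal Python sketch of the same idea (thecommand stands in for the actual minifier invocation, and /home/richi/mydir is the placeholder directory from the question):

import os, subprocess

root = '/home/richi/mydir'  # placeholder directory from the question
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        if name.endswith('.py'):
            # Resolve the absolute path before handing the file to the command
            abspath = os.path.realpath(os.path.join(dirpath, name))
            subprocess.check_call(['thecommand', '-a', abspath])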
I have multiple directories of the form
foo/bar/baz/alpha_1/beta/gamma/files/uniqueFile1
foo/bar/baz/alpha_2/beta/gamma/files/uniqueFile2
foo/bar/baz/alpha_3/beta/gamma/files/uniqueFile3
What is the fastest way to merge these directories to a single directory structure like
foo/bar/baz/alpha/beta/gamma/files/uniqueFile1...uniqueFile3
I could write a Python script to do that, but is there a faster way on a Debian machine? Can rsync help in this case?
EDIT:
Apologies for not making it clear earlier: the depth in the examples is ~10-12, and I do not know some of the directory names, such as alpha*; these are randomly generated while throwing out logs. I was using find with wildcards to list these files earlier, but now another level has been added to the path, which caused my find queries to take over a minute instead of 0.004s. So I am looking for a faster solution.
/known_fixed_path_5_levels/*/known_name*/*/fixed_path_2_levels/n_unique_files
has become
/known_fixed_path_5_levels/*/known_name*/*/xx*/fixed_path_2_levels/unique_file_1
/known_fixed_path_5_levels/*/known_name*/*/xx*/fixed_path_2_levels/unique_file_2
.
.
/known_fixed_path_5_levels/*/known_name*/*/xx*/fixed_path_2_levels/unique_file_n
I basically want to collect all those unique files into one place like how it was before.
With find:
mkdir --parents foo/bar/baz/alpha/beta/gamma/files # create the target directory if necessary
find foo/bar/baz/alpha_[1-3]/beta/gamma/files -type f -exec cp {} foo/bar/baz/alpha/beta/gamma/files \;
As the question is not clear about copying vs. moving, here are two approaches without copying! Even the second one doesn't actually duplicate your data!
Simple bash command
Simply:
cd foo/bar/baz
mv -it alpha/beta/gamma/files alpha_*/beta/gamma/files/uniqueFile*
with the -i switch to prevent overwriting.
This will work perfectly for a small bunch of files.
More robust and adaptive find syntax
Or by using find:
cd foo/bar/baz
find alpha_* -mindepth 3 -type f -exec mv -it alpha/beta/gamma/files {} +
The advantages of using find are:
you could add a lot of flags like -name, -mtime and so on
find will never try to pass more files to the command (mv) than the command line can hold.
cp -al: a Un*x-specific concept
Under Un*x, you can create hard links, which are not symbolic links but secondary directory entries referencing the same inode.
Note: as only one inode is referenced, this works only within the same filesystem.
By using
cp -ialt alpha/beta/gamma/files alpha_*/beta/gamma/files/uniqueFile*
You will gather references to all those inodes in one directory, while keeping only one copy of each file on disk.
Using bash's globstar feature:
cd foo/bar/baz
shopt -s globstar
cp -alit alpha/beta/gamma/files alpha_*/**/uniqueFile*
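And since the question mentioned writing a Python script: a minimal sketch of the same hard-link idea using os.link, with the paths from the original example; like cp -al, this only works within a single filesystem:

import glob, os

target = 'foo/bar/baz/alpha/beta/gamma/files'
os.makedirs(target, exist_ok=True)

# Hard-link every unique file into the single target directory
for src in glob.glob('foo/bar/baz/alpha_*/beta/gamma/files/*'):
    dest = os.path.join(target, os.path.basename(src))
    if not os.path.exists(dest):
        os.link(src, dest)  # same inode, no data copied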
Every 4 hours, files are updated with new information if needed, i.e. if any new information has been processed for that particular file (files correspond to people).
I'm running this command to convert my .stp files (the ones being updated every 4 hours) to .xml files.
rule convert_waveform_stp:
    input: '/data01/stpfiles/{file}.Stp'
    output: '/data01/workspace/bm_data/xmlfiles/{file}.xml'
    shell:
        '''
        mono /data01/workspace/bm_software/convert.exe {input} -o {output}
        '''
My script is in Snakemake (Python-based), but I'm running convert.exe through a shell command.
I'm getting an error on the files already processed by convert.exe. They are saved by convert.exe as write-protected, and there is no option to bypass this within the executable itself.
Error Message:
ProtectedOutputException in line 14 of /home/Snakefile:
Write-protected output files for rule convert_waveform_stp:
/data01/workspace/bm_data/xmlfiles/PID_1234567.xml
I'd still like them to be write-protected but would also like to be able to update them as needed.
Is there something I can add to my shell command to write over the write protected files?
Take a look at the os standard library package:
https://docs.python.org/3.5/library/os.html?highlight=chmod#os.chmod
It allows for chmod with the following caveat:
Although Windows supports chmod(), you can only set the file’s read-only flag with it (via the stat.S_IWRITE and stat.S_IREAD constants or a corresponding integer value). All other bits are ignored.
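For example, a minimal sketch that clears the read-only protection before convert.exe rewrites the file, using the output path from the error message above:

import os, stat

path = '/data01/workspace/bm_data/xmlfiles/PID_1234567.xml'
if os.path.exists(path):
    # Add the owner-write bit so the file can be replaced
    os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)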
@VickiT05, I thought you wanted it in Python. Try this:
Check the original file permission with:
ls -l [your file name]
stat -c %a [your file name]
Change the protection with:
chmod 777 [your file name]
Then change back to the original file mode, or whatever mode you want:
chmod [original file protection mode] [your file name]
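A rough Python equivalent of those three steps, in case you would rather keep everything inside the Snakemake workflow (the file name is hypothetical):

import os, stat

path = 'PID_1234567.xml'                        # hypothetical file name
original = stat.S_IMODE(os.stat(path).st_mode)  # remember the original mode
os.chmod(path, original | stat.S_IWUSR)         # make the file writable
# ... update the file here ...
os.chmod(path, original)                        # restore the original mode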
I want to run a Python script with Jython.
The result shows correctly, but at the same time there is a warning message: "sys-package-mgr*: can't create package cache dir".
How can I solve this problem?
Thanks in advance!
You can change the location of the cache directory to a place that you have read & write access to by setting the "python.cachedir" option when starting jython, e.g.:
jython -Dpython.cachedir=*your cachedir directory here*
or:
java -jar my_standalone_jython.jar -Dpython.cachedir=*your cachedir directory here*
You can read about the python.cachedir option here:
http://www.jython.org/archive/21/docs/registry.html
There are two ways to fix this:
1) By changing permissions to allow writing to the directory in the error message.
2) By setting python.cachedir.skip=true
You can read this:
http://www.jython.org/jythonbook/en/1.0/ModulesPackages.html#module-search-path-compilation-and-loading
for further insights.
Making directories world-writable admittedly makes the problem "go away"; however, it introduces a huge security hole: anyone could introduce code into the now world-writable directory that would be executed in the users' Jython environment.
Setting the cachedir to skip would presumably result in a performance drop (why implement a caching scheme other than to improve performance?).
Instead I did the following:
I created a new group (in my case eclipse, but it could have been jython) and added the Jython users to that group.
$ sudo groupadd eclipse
I then changed the group of my eclipse plugins folder and its children to 'eclipse'.
/opt/eclipse/plugins $ sudo chgrp -R eclipse *
Then I changed the group permissions as follows
/opt/eclipse/plugins $ sudo chmod -R g+w *
/opt/eclipse/plugins $ find * -type d -print | sudo xargs chmod g+s
This added group-write permission and set the setgid bit on all directories recursively. The setgid bit causes newly created directories to inherit the group of their parent.
The final touch was changing the umask for the eclipse users to 007.
$ sudo vi /etc/login.defs
Change UMASK from 022 to 007:
UMASK 007
The easiest fix I found so far was to do:
$ sudo chmod -R 777 /opt/jython/cachedir
It may seem like a very simple question, but I could not find any way to fix it.
My intention is to convert every ".ui" file into a ".py" file by invoking the pyuic4 command (from PyQt). I tried to manage this with a very short makefile:
%.py: %.ui
	pyuic4 $< --output $@
That's all I need at the moment.
The makefile is named "Makefile" and located in the folder where "make" is invoked from, and so are the ".ui" files. "pyuic4(.bat)" is in the system's path (Windows 7), and so are the Unix Utilities where "make" is part of.
When running "make" from the Windows console, it says:
make: *** No targets. Stop.
Invoking pyuic4 from the command line with explicit file names works.
I know I could specify any target file by its own, but if possible I want to avoid this.
Any ideas?
As per kasterma's comment, you need to tell make which target to build, but you've only provided a pattern rule. This can be done in the following way.
UIFILES := $(wildcard *.ui)
PYFILES := $(UIFILES:.ui=.py)

.PHONY: all
all: $(PYFILES)

%.py: %.ui
	pyuic4 $< --output $@
As you are obviously using GNU Makefile syntax, I would advise you to write your rule like this:
UIFILES = $(wildcard *.ui)

.PHONY: ui2py
ui2py: $(UIFILES)
	@for uifile in $(UIFILES); do \
		pyuic4 $$uifile --output $${uifile%.ui}.py; \
	done
Although the problem could be solved by either perror's or eriktous' solution, I'm now going the third way mentioned by eriktous: simply invoking the pyuic4 command from a plain script. It runs quite fast, and even if the output results in identical files, no harm is done to the source code control.
I encountered a second point, which may have distracted me. The pyuic4 command is really named pyuic4.bat, which is a "batch file" on Windows, similar to shell scripts in a Linux/Unix environment; similar, but not identical. If a batch file is invoked from another batch file, it should be invoked with a leading "call" statement to prevent termination of the calling batch after the first invocation.
If I have three files (the @ sign is to prevent the command from being listed during execution):
D:\Projekte\test>type main.cmd
@sub1
@sub2
D:\Projekte\test>type sub1.cmd
@echo This is sub 1
D:\Projekte\test>type sub2.cmd
@echo This is sub 2
... the result is just
D:\Projekte\test>main
This is sub 1
So my "solution" for this very small thing is a simple batch file called "update.cmd", which may be expanded by copies of this line:
call pyuic4 mainwindow.ui --output mainwindow.py
That's not what I initially wanted, but it works for me.
But anyway, thanks for your help :-)