Python Nmap - Argument Parsing

I'm trying to improve my Python and am working with the Violent Python book to help me do that.
One of the tasks is to create a Python nmap scanner, which I've done; it can successfully scan a host, checking against a list of ports. However, the scanner uses the default -sV switch for version scanning, and I want to implement a way for the user to change the type of scan to run, i.e. -sU for UDP, etc.
My code is available at: https://absentia.mycorneroftheinter.net/james/violentPythonScripts/src/master/chapter2/pyPortScanNmap.py
(It is a self-signed cert, so you will get a warning...)
Back to my question... when I try to code the option to specify a different scanning option, such as -sU for UDP, the program crashes, saying that another argument is required (the IP address of the host to scan), even though I have already specified that using the -H 172.16.133.136 switch.
I think that I have missed something when I try to put the different scan functionality in, because I thought specifying a different switch would just replace the default scan type that nmap.py uses(?)
Can someone shed any light on where I have gone wrong? You can see in the code I have comments where I have tried to implement the additional option, but alas, not successfully.
EDIT:
As per the comments below, when I supply the --ping switch for example, and then supply a random value to this, I get the following stack trace error returned:
Traceback (most recent call last):
  File "pyPortScanNmap.py", line 88, in <module>
    main()
  File "pyPortScanNmap.py", line 62, in main
    parser.add_option("", dest = 'tcpScan', type = 'string', help = 'Run TCP Scan') # Run a TCP scan against the specified host(s)
  File "/usr/lib/python2.7/optparse.py", line 1013, in add_option
    option = self.option_class(*args, **kwargs)
  File "/usr/lib/python2.7/optparse.py", line 566, in __init__
    opts = self._check_opt_strings(opts)
  File "/usr/lib/python2.7/optparse.py", line 586, in _check_opt_strings
    raise TypeError("at least one option string must be supplied")
TypeError: at least one option string must be supplied
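The traceback points at the actual problem: add_option("") passes an empty option string, and optparse requires at least one non-empty flag name per option. A minimal sketch of how the scan-type option could be declared is below; the flag names and defaults are assumptions for illustration, not taken from pyPortScanNmap.py:

```python
import optparse

# Hypothetical reduced option set modelled on the -H switch mentioned above.
parser = optparse.OptionParser(usage='usage: %prog -H <host> -p <port(s)> [-o <scan type>]')
parser.add_option('-H', dest='tgtHost', type='string', help='specify target host')
parser.add_option('-p', dest='tgtPorts', type='string', help='specify target port(s)')
# At least one non-empty option string is required here; add_option("")
# is exactly what raises the TypeError in the traceback above.
parser.add_option('-o', dest='scanType', type='string', default='-sV',
                  help='nmap scan type switch (default: -sV)')

# Parsing an explicit list for the demo; normally parse_args() reads sys.argv.
options, args = parser.parse_args(['-H', '172.16.133.136', '-p', '80', '-o', '-sU'])
```

The parsed value can then replace the hard-coded default, e.g. something like nmap.PortScanner().scan(options.tgtHost, options.tgtPorts, options.scanType), so -sU is used when supplied and -sV otherwise.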

Related

Pyinstaller not allowing multiprocessing with MacOS

I have a python file that I would like to package as an executable for MacOS 11.6.
The Python file (called Service.py) relies on one other json file and runs perfectly fine when run with Python. My file uses argparse, as the arguments can differ depending on what is needed.
Example of how the file is called with python:
python3 Service.py -v Zephyr_Scale_Cloud https://myurl.cloud/ philippa@email.com password1 group3
The file is run in exactly the same way when it is an executable:
./Service.py -v Zephyr_Scale_Cloud https://myurl.cloud/ philippa@email.com password1 group3
I can package the file using PyInstaller and the executable runs.
Command used to package the file:
pyinstaller --paths=/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ Service.py
However, when I get to the point that requires multiprocessing, the arguments get lost. My second argument (here noted as https://myurl.cloud) is a URL that I require.
The error I see is:
[MAIN] Starting new process RUNID9157
url before constructing the client recognised as pipe_handle=15
usage: Service [-h] test_management_tool url
Service: error: the following arguments are required: url
Traceback (most recent call last):
  File "urllib3/connection.py", line 174, in _new_conn
  File "urllib3/util/connection.py", line 72, in create_connection
  File "socket.py", line 954, in getaddrinfo
I have done some logging, and the url does get correctly read. But as soon as the process starts and picks up what it needs, the url is changed to 'pipe_handle=x'; in the code above it is pipe_handle=15.
I need the url to retrieve an authentication token, but it just stops being read as the correct value and is changed to this pipe_handle value. I have no idea why.
Has anyone else seen this?!
I am using Python 3.9, PyInstaller 4.4 and ArgParse.
I have also added
if __name__ == "__main__":
    if sys.platform.startswith('win'):
        # On Windows - multiprocessing is different to Unix and Mac.
        multiprocessing.freeze_support()
to my if __name__ == "__main__" section, as I saw this on other posts, but it doesn't help.
Can someone please assist?
Sending commands via sys.argv is complicated by the fact that multiprocessing's "spawn" start method uses that to pass the file descriptors for the initial communication pipes between the parent and child.
I'm projecting here a little, because you did not share the code of how/where you call argparse, and how/where you call multiprocessing.
If you are parsing args outside of if __name__ == "__main__":, the args may get parsed (re-parsed on child import __main__) before sys.argv gets automatically cleaned up by multiprocessing.spawn.prepare() in the child. You should be able to fix this by moving the argparse stuff inside your target function. It also may be easier to parse the args in the parent, and simply send the parsed results as an argument to the target function. See this answer of mine for further discussion on sys.argv with multiprocessing.

Pyshark - tshark can't use user plugin in 'decode_as'

I use Pyshark, which uses tshark, to decode a pcap file, and I have a problem using the 'decode_as' option.
I'm trying to decode a specific UDP port as SOMEIP protocol. This is a dissector I added that is taken from here.
It is important to say that both the dissector and the "decode_as" option work perfectly in Wireshark.
This is the code I use:
import pyshark

packets = pyshark.FileCapture(pcap_path, display_filter="udp")
packets.next()  # works fine

packets = pyshark.FileCapture(pcap_path, display_filter="udp", decode_as={"udp.port==50000": "someip"})
packets.next()  # doesn't return a packet
There is also an ignored exception:
Exception ignored in: <function Capture.__del__ at 0x000001D9CE035268>
Traceback (most recent call last):
  File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\site-packages\pyshark\capture\capture.py", line 412, in __del__
    self.close()
  File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\site-packages\pyshark\capture\capture.py", line 403, in close
    self.eventloop.run_until_complete(self._close_async())
  File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\asyncio\base_events.py", line 573, in run_until_complete
    return future.result()
  File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\site-packages\pyshark\capture\capture.py", line 407, in _close_async
    await self._cleanup_subprocess(process)
  File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\site-packages\pyshark\capture\capture.py", line 400, in _cleanup_subprocess
    % process.returncode)
pyshark.capture.capture.TSharkCrashException: TShark seems to have crashed (retcode: 1). Try rerunning in debug mode [ capture_obj.set_debug() ] or try updating tshark.
As it recommends, I use debug mode (packets.set_debug()), and after running it I get:
tshark: Protocol "someip" isn't valid for layer type "udp.port"
tshark: Valid protocols for layer type "udp.port" are:
....
and then a long list of protocols, which "someip" is not in... (but another dissector that I added, and is dll, is)
Any idea what is wrong here?
Does the dissector cause the problem, or did I do something wrong?
Again- the "decode as" works fine when done manually in Wireshark.
Thanks!
EDIT
I found the part in the Wireshark code that causes this error.
So I read about dissector tables, and it seems that there shouldn't be a problem, since the dissector lua code does add "someip" to the dissector table of "udp.port":
local udp_dissector_table = DissectorTable.get("udp.port")
-- Register dissector to multiple ports
for i, port in ipairs{30490, 30491, 30501, 30502, 30503, 30504} do
    udp_dissector_table:add(port, p_someip)
    tcp_dissector_table:add(port, p_someip)
end
I also tried to use the dissectortable:add_for_decode_as(proto) function (described in 11.6.2.11 here):
udp_dissector_table:add_for_decode_as(p_someip)
But it didn't work :(
Any idea will be appreciated, thanks
Even though it is an old question:
I tried with a pcap of mine and it worked, so 3 suggestions:
There was a bug, which has since been fixed - so it may simply work for you now as well.
The UDP port is wrong. I have a different one (30490), and if the port is wrong, the packet will be empty. Please try with 50001, as this port shows on your screenshot.
The pcap has some problems; in this case, try with another one.
Hope that helps!

Connect to openstack is failing

I have written a bit of Python code to interact with an OpenStack instance, using the shade library.
The call
myinstance = shade.openstack_cloud(cloud='mycloud', **auth_data)
works fine on my local Ubuntu installation, but fails on our "backend" servers (running RHEL 7.2).
  File "mystuff/core.py", line 248, in _create_connection
    myinstance = shade.openstack_cloud(cloud='mycloud', **auth_data)
  File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/__init__.py", line 106, in openstack_cloud
    return OpenStackCloud(cloud_config=cloud_config, strict=strict)
  File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/openstackcloud.py", line 312, in __init__
    self._local_ipv6 = _utils.localhost_supports_ipv6()
  File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/_utils.py", line 254, in localhost_supports_ipv6
    return netifaces.AF_INET6 in netifaces.gateways()['default']
AttributeError: module 'netifaces' has no attribute 'AF_INET6'
The admin for that system tells me that IPv6 is not enabled there; maybe that explains the failure. I did some research, but couldn't find anything to prevent it.
Any thoughts are welcome.
Update: I edited my clouds.yml; and it looks like this:
# openstack/shade config file
# required to connect provisioning using the shade module
client:
  force_ipv4: true
clouds:
  mycloud:
    auth:
      user_domain_name: xxx
      auth_url: 'someurl'
    region_name: RegionOne
I also tried export OS_FORCE_IPV4=True - but the error message is still there.
If you go through the OpenStack os-client-config documentation, it mentions this IPv6-related issue:
IPv6 is the future, and you should always use it if your cloud
supports it and if your local network supports it. Both of those are
easily detectable and all friendly software should do the right thing.
However, sometimes you might exist in a location where you have an
IPv6 stack, but something evil has caused it to not actually function.
In that case, there is a config option you can set to unbreak you,
force_ipv4, or the OS_FORCE_IPV4 boolean environment variable.
So, using this boolean config you can force the appropriate network protocol. Adding the lines below to your clouds.yaml file
client:
  force_ipv4: true
will force IPv4 and will hopefully resolve your issue.
Edit by OP: unfortunately the above doesn't help; fixed it by reworking shade-1.20.0-py3.5.egg/shade/_utils.py: I changed the return statement
return netifaces.AF_INET6 in netifaces.gateways()['default']
to a simple
return False
And stuff is working. Of course, this is just a workaround; but a bug report was filed as well.
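Rather than editing the installed egg, the same workaround can be applied at runtime. Below is a sketch of a defensive replacement for the failing helper; the function mirrors shade._utils.localhost_supports_ipv6 from the traceback, and the monkey-patch suggestion at the end is an assumption about where to hook it, not shade's documented API:

```python
def localhost_supports_ipv6():
    # Mirrors the shade helper from the traceback above, but degrades
    # gracefully when the netifaces build lacks AF_INET6 (the exact
    # AttributeError the OP hit) instead of raising.
    try:
        import netifaces
    except ImportError:
        return False
    if not hasattr(netifaces, 'AF_INET6'):
        return False
    return netifaces.AF_INET6 in netifaces.gateways().get('default', {})
```

Before calling shade.openstack_cloud(), something like `import shade._utils; shade._utils.localhost_supports_ipv6 = localhost_supports_ipv6` would apply the fix without touching the installed files, and it survives reinstalls of the egg.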

Control firewalld in CentOS via Python's dbus module?

My goal is to automate configuring firewalls on CentOS 7 machines using Python.
The OS comes with firewalld, so that's what I'm using. I looked into it and found that it uses dbus (I've never heard of or dealt with any of this - please correct me if anything I say is incorrect.)
I found this documentation for how to control dbus processes using Python:
http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.txt
I checked and the version of Python that comes with the OS includes the dbus module, so it seems like a promising start.
That document suggests that I needed to learn more about what firewalld exposes via the dbus interface. So I did some more research and found this:
https://www.mankier.com/5/firewalld.dbus
The first document says I need to start out with a "well-known name". Their example for such a thing was org.freedesktop.NetworkManager. The second document is titled firewalld.dbus, so I figured that was as good a name as any to try since the document doesn't explicitly give a name anywhere else.
The first document also says I need a name for an object path. Their example is /org/freedesktop/NetworkManager. The second document has an object path of /org/fedoraproject/FirewallD1.
I put those together and tried using the first method the first document suggested, SystemBus's get_object():
>>> from dbus import SystemBus
>>> bus = SystemBus()
>>> proxy = bus.get_object('firewalld.dbus', '/org/fedoraproject/FirewallD1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 241, in get_object
    follow_name_owner_changes=follow_name_owner_changes)
  File "/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 248, in __init__
    self._named_service = conn.activate_name_owner(bus_name)
  File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 180, in activate_name_owner
    self.start_service_by_name(bus_name)
  File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 278, in start_service_by_name
    'su', (bus_name, flags)))
  File "/usr/lib64/python2.7/site-packages/dbus/connection.py", line 651, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name firewalld.dbus was not provided by any .service files
I also gave org.fedoraproject.FirewallD1 a try as the first parameter but ended up with a similar error message.
Why are these not working? Is there some way I can discover what the proper names are? It mentions ".service files" at the end of the error message... where would such a file be located?
Edit: Found several ".service files" by using find / -name '*.service'. One of them is at /usr/lib/systemd/system/firewalld.service... seems pretty promising, so I'll check it out.
Edit 2: It's a rather short file... only about 10 lines. One of them says BusName=org.fedoraproject.FirewallD1. So I'm not sure why it said the name was not provided by any .service files... unless it's not using this file for some reason?
If the unit file says:
BusName=org.fedoraproject.FirewallD1
Then maybe you should try using that as your bus name:
>>> import dbus
>>> bus = dbus.SystemBus()
>>> p = bus.get_object('org.fedoraproject.FirewallD1', '/org/fedoraproject/FirewallD1')
>>> p.getDefaultZone()
dbus.String(u'FedoraWorkstation')
I figured this out based on the fact that this:
>>> help(bus.get_object)
Says that the get_object call looks like:
get_object(self, bus_name, object_path, introspect=True, follow_name_owner_changes=False, **kwargs)

Using python subprocess to fake running a cmd from a terminal

We have a vendor-supplied Python tool (it's byte-compiled; we don't have the source). Because of this, we're also locked into using the vendor-supplied Python 2.4. The way to run the util is:
source login.sh
oupload [options]
The login.sh just sets a few env variables, and then 2 aliases:
odownload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_download_command.pyc "$@"
}
oupload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_upload_command.pyc "$@"
}
Now, when I run it their way, it works fine. It will prompt for a username and password, then do its thing.
I'm trying to create a wrapper around the tool to do some extra steps after it's run and provide some sane defaults for the utility. The problem I'm running into is I cannot, for the life of me, figure out how to use subprocess to successfully do this. It seems to realize that the original command isn't running directly from the terminal and bails.
I created a '/usr/local/bin/oupload' and copied from the original login.sh. Only difference is instead of doing an alias at the end, I actually run the command.
Then, in my python script, I try to run my new shell script:
if os.path.exists(options.zipfile):
    try:
        cmd = string.join(cmdargs, ' ')
        p1 = Popen(cmd, shell=True, stdin=PIPE)
But I get:
Enter Opsware Username: Traceback (most recent call last):
  File "./command.py", line 31, in main
  File "./controller.py", line 51, in handle
  File "./controllers/word_upload_controller.py", line 81, in _handle
  File "./controller.py", line 66, in _determineNew
  File "./lib/util.py", line 83, in determineNew
  File "./lib/util.py", line 112, in getAuth
Empty Username not legal
Unknown Error Encountered
SUMMARY:
Name: Empty Username not legal
Description: None
So it seemed like an extra carriage return was getting sent (I tried rstripping all the options, which didn't help).
If I don't set stdin=PIPE, I get:
Enter Opsware Username: Traceback (most recent call last):
  File "./command.py", line 31, in main
  File "./controller.py", line 51, in handle
  File "./controllers/word_upload_controller.py", line 81, in _handle
  File "./controller.py", line 66, in _determineNew
  File "./lib/util.py", line 83, in determineNew
  File "./lib/util.py", line 109, in getAuth
IOError: [Errno 5] Input/output error
Unknown Error Encountered
I've tried other variations of using p1.communicate and p1.stdin.write(), along with shell=False and shell=True, but I've had no luck figuring out how to properly send along the username and password. As a last resort, I tried looking at the bytecode for the utility they provided - it didn't help - once I called the util's main routine with the proper arguments, it ended up core dumping with thread errors.
Final thoughts - the utility doesn't seem to want to 'wait' for any input. When run from the shell, it pauses at the 'Username' prompt. When run through Python's Popen, it just blazes through and ends, assuming no password was given. I tried to look up ways of maybe preloading the stdin buffer, thinking maybe the process would read from that if it was available, but couldn't figure out if that was possible.
I'm trying to stay away from using pexpect, mainly because we have to use the vendor's provided python 2.4 because of the precompiled libraries they provide and I'm trying to keep distribution of the script to as minimal a footprint as possible - if I have to, I have to, but I'd rather not use it ( and I honestly have no idea if it works in this situation either ).
Any thoughts on what else I could try would be most appreciated.
UPDATE
So I solved this by diving further into the bytecode and figuring out what I was missing from the compiled command.
However, this presented two problems -
1. The vendor code, when called, was doing an exit when it completed.
2. The vendor code was writing to stdout, which I needed to store and operate on (it contains the ID of the uploaded pkg). I couldn't just redirect stdout, because the vendor code was still asking for the username/password.
1 was solved easy enough by wrapping their code in a try/except clause.
2 was solved by doing something similar to: https://stackoverflow.com/a/616672/677373
Instead of a log file, I used cStringIO. I also had to implement a fake 'flush' method, since it seems the vendor code was calling that and complaining that the new obj I had provided for stdout didn't supply it - code ends up looking like:
class Logger(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.log = StringIO()

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        self.terminal.flush()
        self.log.flush()

if os.path.exists(options.zipfile):
    try:
        os.environ['OCLI_CODESET'] = 'ISO-8859-1'
        backup = sys.stdout
        sys.stdout = output = Logger()
        # UploadCommand was the command found in the bytecode
        upload = UploadCommand()
        try:
            upload.main(cmdargs)
        except Exception, rc:
            pass
        sys.stdout = backup
        # now do some fancy stuff with output from output.log
I should note that the only reason I simply do a 'pass' in the except: clause is that the except clause is always called. The 'rc' is actually the return code from the command, so I will probably add handling for non-zero cases.
I tried to lookup ways of maybe preloading the stdin buffer
Do you perhaps want to create a named FIFO, fill it with username/password info, then reopen it in read mode and pass it to Popen (as in Popen(..., stdin=myfilledbuffer))?
You could also just create an ordinary temporary file, write the data to it, and reopen it in read mode, again passing the reopened handle as stdin. (This is something I'd personally avoid doing, since writing usernames/passwords to temporary files is generally bad practice. OTOH, it's easier to test than FIFOs.)
As for the underlying cause: I suspect that the offending software is reading from stdin via a non-blocking method. Not sure why that works when connected to a terminal.
AAAANYWAY: no need to use pipes directly via Popen at all, right? I kinda laugh at the hackishness of this, but I'll bet it'll work for you:
# you don't actually seem to need popen here IMO -- call() does better for this application.
statuscode = call('echo "%s\n%s\n" | oupload %s' % (username, password, options), shell=True)
tested with status = call('echo "foo\nbar\nbar\nbaz" |wc -l', shell = True) (output is '4', naturally.)
The original question was solved by just avoiding the issue and not using the terminal and instead importing the python code that was being called by the shell script and just using that.
I believe J.F. Sebastian's answer would probably work better for what was originally asked, however, so I'd suggest people looking for an answer to a similar question look down the path of using the pty module.
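For readers who land here with the same problem, a minimal pty.fork() sketch looks like this. A stand-in sh prompt plays the role of the vendor's oupload (which obviously isn't available here); the point is that the child sees a real controlling terminal, so an interactive prompt behaves exactly as it does in a shell:

```python
import os
import pty

# Run a prompt-and-read loop in a child attached to a pseudo-terminal,
# then answer the prompt from the parent over the pty master (Unix only).
pid, fd = pty.fork()
if pid == 0:
    # Child: stdin/stdout are the pty slave, so the tool believes
    # it is talking to a real terminal.
    os.execvp('sh', ['sh', '-c',
                     'printf "Enter Opsware Username: "; read u; echo "got $u"'])
else:
    os.read(fd, 1024)            # consume the prompt
    os.write(fd, b'myuser\n')    # supply the "typed" answer
    output = b''
    while True:
        try:
            chunk = os.read(fd, 1024)
        except OSError:          # EIO when the child side closes (Linux)
            break
        if not chunk:
            break
        output += chunk
    os.waitpid(pid, 0)
```

Unlike the PIPE approach in the question, the child here actually pauses at the prompt until the parent writes to the master side, which is the behaviour the OP was missing.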
