I'm using an XBee3 and I want to append data to a file.
I tried this test script, but I receive an EEXIST error if TEST.txt already exists. If the file doesn't exist, it is created on the first run, but I get the same error when I run the script again.
f = open("TEST.txt", 'a')
for a in range(3):
    f.write("#EMPTY LINE#\n")
f.close()
Traceback (most recent call last):
File "main", line 1, in
OSError: [Errno 7017] EEXIST
I formatted the XBee's file system, by the way.
It sounds like you're using an 802.15.4, DigiMesh or Zigbee module. The file system in those modules is extremely limited and doesn't allow modifying existing files. There should be documentation on the product that lists those limitations (no rename, no modify/append, only one open file at a time, etc.)
XBee/XBee3 Cellular modules have a fuller file system implementation that allows for renaming files and modifying file contents.
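If you are stuck on one of those limited modules and the files are small, a workaround is to emulate append by rewriting the whole file: read the old contents, delete the file, and write it back with the new data at the end. Here's a minimal sketch, assuming the file system allows deleting files, that your MicroPython build provides os.remove, and that the file fits comfortably in memory:

import os

def append_line(path, line):
    # Emulate append on a file system that only supports whole-file writes.
    try:
        with open(path) as f:
            old = f.read()
    except OSError:
        old = ""          # file does not exist yet
    try:
        os.remove(path)   # assumes deleting files is allowed on this build
    except OSError:
        pass
    # Only one file is open at any point, which stays within the
    # one-open-file-at-a-time limit mentioned above.
    with open(path, "w") as f:
        f.write(old + line)

for _ in range(3):
    append_line("TEST.txt", "#EMPTY LINE#\n")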
Abstract:
I am analysing a pcap file containing live malware (for educational purposes) with Wireshark, and I managed to extract a few objects from the HTTP stream, including some executables.
During my analysis, I found indications that the Fiesta Exploit Kit was used.
Having Googled a ton, I came across a GitHub repo: https://github.com/0x3a/tools/blob/master/fiesta-payload-decrypter.py
What am I trying to achieve?
I am trying to run the Python script fiesta-payload-decrypter.py against the malicious executable (extracted from the pcap).
What have I done so far?
I've copied the code into a plain text file and saved it as malwaredecoder.py. The script is saved in the same folder (/Download/Investigation/) as the malware.exe that I want to run it against.
What's the Problem?
Traceback (most recent call last):
File "malwaredecoder.py", line 51, in <module>
sys.exit(DecryptFiestaPyload(sys.argv[1], sys.argv[2]))
File "malwaredecoder.py", line 27, in DecryptFiestaPyload
fdata = open(inputfile, "rb").read()
IOError: [Errno 2] No such file or directory: '-'
I am running this Python script on Kali Linux; any help would be much appreciated. Thank you.
The script expects two args... What are you passing it?
Looks like it expects the args to be files, and it is seeing a - (dash) as the input file.
https://github.com/0x3a/tools/blob/master/fiesta-payload-decrypter.py#L44 Here it looks like the first arg is the input file and second is the output file.
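Roughly speaking, the script boils down to the pattern below (a simplified paraphrase for illustration, not the actual source; the function name and usage message here are made up). Whatever you pass as the first argument is opened as an ordinary file on disk, so a bare - is looked up as a file literally named "-", which is exactly the error you're seeing:

import sys

def decrypt_fiesta_payload(inputfile, outputfile):
    # The first argument is opened as a plain file; "-" is not treated as stdin.
    fdata = open(inputfile, "rb").read()
    decoded = fdata  # the real script decodes the payload at this point
    open(outputfile, "wb").write(decoded)
    return 0

if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.exit("usage: malwaredecoder.py <input file> <output file>")
    sys.exit(decrypt_fiesta_payload(sys.argv[1], sys.argv[2]))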
Try running it like this:
python malwaredecoder.py /Download/Investigation/fileImInvestigating.pcap /Download/Investigation/out.pcap
All that said, good luck, that script looks pretty old and was last modified in 2015.
I am working on a project in Python in which I am parsing data from a zipped folder containing log files. The code works fine for most zips, but occasionally this exception is thrown:
[Errno 22] Invalid argument
As a result, the entire file is skipped, thus excluding the data in the desired log files from the results. When I try to extract the zipped file using the default Windows utility, I am met with this error:
(screenshot: Windows "Zip error" dialog)
However, when I try to extract the file with 7zip, it succeeds, save for two errors:
1 <path> Unexpected End of Data
2 Data error: x.csv
x.csv is totally unrelated to the log I am trying to parse. I need my code to be resilient enough that, if an unrelated file in the archive is corrupted, it can still parse the other logs that are not.
At the moment, I am using the zipfile module to extract the files into memory. Is there a robust way to do this without the entire file being skipped?
Update 1: I believe the error I am running into is that the zip file is missing a footer (the end-of-central-directory record); I realized this when looking at the file in a hex editor. I don't really have any idea how to safely edit the actual file using Python.
Here is the code that I am using to extract zips into memory:
for zip in os.listdir(directory):
    try:
        if zip.lower().endswith('.zip'):
            if os.path.isfile(directory + "\\" + zip):
                logs = zipfile.ZipFile(directory + "\\" + zip)
                for log in logs.namelist():
                    if log.endswith('log.txt'):
                        data = logs.read(log)
Edit 2: Traceback for the error:
Traceback (most recent call last):
File "c:/Users/xxx/Desktop/Python Porjects/PE/logParse.py", line 28, in parse
logs = zipfile.ZipFile(directory + "\\" + zip)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python37\lib\zipfile.py", line 1222, in __init__
self._RealGetContents()
File "C:\Users\xxx\AppData\Local\Programs\Python\Python37\lib\zipfile.py", line 1289, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
The stack trace shows that it isn't your code failing to read the file; it's Python's zipfile module that raises the error.
It looks like Python's zip handling is stricter than other programs (see this bug report, where a user describes a difference in behaviour between Python and other tools such as GNOME Archive Manager).
It may be worth filing a bug report.
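In the meantime, you can contain the damage in your own code. Below is a minimal sketch (assuming Python 3 and the log-collecting loop from the question): it catches BadZipFile per archive and per member, so one corrupt member such as x.csv doesn't abort the rest. Note that if the end-of-central-directory record (the missing "footer") is itself unreadable, zipfile cannot open the archive at all and the file would have to be repaired first.

import os
import zipfile

def read_logs(directory):
    # Collect the contents of every *log.txt member that can still be read.
    data = {}
    for name in os.listdir(directory):
        if not name.lower().endswith('.zip'):
            continue
        path = os.path.join(directory, name)
        try:
            with zipfile.ZipFile(path) as archive:
                for member in archive.namelist():
                    if not member.endswith('log.txt'):
                        continue
                    try:
                        data[(name, member)] = archive.read(member)
                    except (zipfile.BadZipFile, OSError) as exc:
                        # A damaged member should not abort the others.
                        print(f"Skipping member {member!r} in {name}: {exc}")
        except zipfile.BadZipFile as exc:
            # The archive itself is unreadable by the zipfile module.
            print(f"Skipping archive {name}: {exc}")
    return data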
I have several scripts running on a server which pickle and unpickle various dictionaries. They all use the same basic code for pickling, shown below:
import pickle

SellerDict=open('/home/hostadl/SellerDictkm','rb')
SellerDictionarykm=pickle.load(SellerDict)
SellerDict.close()

SellerDict=open('/home/hostadl/SellerDictkm','wb')
pickle.dump(SellerDictionarykm,SellerDict)
SellerDict.close()
All the scripts run fine except for one of them. The one that has issues goes to various websites, scrapes data and stores it in a dictionary. This code runs all day long, pickling and unpickling dictionaries, and stops at midnight. A cron job then starts it again the next morning. This script can run for weeks without a problem, but about once a month it dies with an EOFError when it tries to open a dictionary. The dictionaries are usually about 80 MB. I even tried adding SellerDict.flush() before SellerDict.close() when pickling the data, to make sure everything was being flushed.
Any ideas what could be causing this? Python is pretty solid, so I don't think it is due to the size of the file. The fact that the code runs fine for a long time before dying leads me to believe that something being saved in the dictionary is causing the issue, but I have no idea what.
Also, if you know of a better way to save dictionaries than pickle, I am open to options. As I said earlier, the dictionaries are constantly being opened and closed. Just for clarification, only one program uses a given dictionary, so the issue is not caused by several programs trying to access the same dictionary.
UPDATE:
Here is the traceback that I have from a log file.
Traceback (most recent call last):
File "/home/hostadl/CompileRecentPosts.py", line 782, in <module>
main()
File "/home/hostadl/CompileRecentPosts.py", line 585, in main
SellerDictionarykm=pickle.load(SellerDict)
EOFError
So this actually turned out to be a memory issue. When the computer ran out of RAM while trying to unpickle or load the data, the process would fail with this EOFError. I increased the RAM on the computer and it was never an issue again.
Thanks for all the comments and help.
Here's what happens when you don't use locking:
import pickle
# define initial dict
orig_dict={'foo':'one'}
# write dict to file
writedict_file=open('./mydict','wb')
pickle.dump(orig_dict,writedict_file)
writedict_file.close()
# read the dict from file
readdict_file=open('./mydict','rb')
mydict=pickle.load(readdict_file)
readdict_file.close()
# now we have new data to save
new_dict={'foo':'one','bar':'two'}
writedict_file=open('./mydict','wb')
#pickle.dump(orig_dict,writedict_file)
#writedict_file.close()
# but...whoops! before we could save the data
# some other reader tried opening the file
# now they are having a problem
readdict_file=open('./mydict','rb')
mydict=pickle.load(readdict_file) # errors out here
readdict_file.close()
Here's the output:
python pickletest.py
Traceback (most recent call last):
File "pickletest.py", line 26, in <module>
mydict=pickle.load(readdict_file) # errors out here
File "/usr/lib/python2.6/pickle.py", line 1370, in load
return Unpickler(file).load()
File "/usr/lib/python2.6/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.6/pickle.py", line 880, in load_eof
raise EOFError
EOFError
Eventually, some read process is going to try to read the pickled file while a write process already has it open to write. You need to make sure that you have some way to tell whether another process already has a file open for writing before you try to read from it.
For a very simple solution, have a look at this thread that discusses using Filelock.
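As an alternative to an explicit lock, you can also make the writer atomic, so a reader can never observe a half-written file. A minimal sketch (not the poster's original code, and assuming Python 3 for os.replace): write the pickle to a temporary file in the same directory, then rename it over the target in one step.

import os
import pickle
import tempfile

def save_dict_atomically(obj, path):
    # Readers of `path` always see either the old complete file or the new
    # complete file, never a partially written one.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, 'wb') as tmp:
            pickle.dump(obj, tmp)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.remove(tmp_path)
        raise

def load_dict(path):
    with open(path, 'rb') as f:
        return pickle.load(f)

This also protects against a crash or power loss in the middle of a dump, which would otherwise leave a truncated pickle behind and produce the same EOFError on the next load.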
I'm reading a bunch of NetCDF files using the pupynere interface (on Linux). The following code results in an mmap error:
import numpy as np
import os, glob
from pupynere import NetCDFFile as nc
alts = []
vals = []
path='coll_mip'
filter='*.nc'
for infile in glob.glob(os.path.join(path, filter)):
    curData = nc(infile,'r')
    vals.append(curData.variables['O3.MIXING.RATIO'][:])
    alts.append(curData.variables['ALTITUDE'][:])
    curData.close()
Error:
$ python2.7 /mnt/grid/src/profile/contra.py
Traceback (most recent call last):
File "/mnt/grid/src/profile/contra.py", line 15, in <module>
File "/usr/lib/python2.7/site-packages/pupynere-1.0.13-py2.7.egg/pupynere.py", line 159, in __init__
File "/usr/lib/python2.7/site-packages/pupynere-1.0.13-py2.7.egg/pupynere.py", line 386, in _read
File "/usr/lib/python2.7/site-packages/pupynere-1.0.13-py2.7.egg/pupynere.py", line 446, in _read_var_array
mmap.error: [Errno 24] Too many open files
Interestingly, if I comment out one of the append commands (either will do!), it works! What am I doing wrong? I'm closing the file, right? This is somehow related to the Python list. I used a different, inefficient approach before (always copying each element) that worked.
PS: ulimit -n yields 1024; the program fails at file number 498.
Maybe related, but the solution doesn't work for me: NumPy and memmap: [Errno 24] Too many open files
My guess is that the mmap.mmap call in pupynere is holding the file descriptor open (or creating a new one); if the slices you append are views backed by that mapping, the mapping can stay alive even after close(). What if you copy the data out instead:
vals.append(curData.variables['O3.MIXING.RATIO'][:].copy())
alts.append(curData.variables['ALTITUDE'][:].copy())
@corlettk: yeah, since it is Linux, strace -e trace=file will do:
strace -e trace=file,desc,munmap python2.7 /mnt/grid/src/profile/contra.py
This will show exactly which file is opened when - and even the file descriptors.
You can also use
ulimit -a
to see which limits are currently in effect.
Edit
gdb --args python2.7 /mnt/grid/src/profile/contra.py
(gdb) break dup
(gdb) run
If that results in too many breakpoints prior to the ones related to the mapped files, you might want to run it without breakpoints for a while, break it manually (Ctrl+C) and set the breakpoint during 'normal' operation; that is, if you have enough time for that :)
Once it breaks, inspect the call stack with
(gdb) bt
Hmmm... Maybe, just maybe, using with curData might fix it? Just a WILD guess.
EDIT: Does curData have a Flush method, perchance? Have you tried calling that before Close?
EDIT 2:
Python 2.5's with statement (lifted straight from Understanding Python's "with" statement)
with open("x.txt") as f:
    data = f.read()
    # do something with data
... basically it ALWAYS closes the resource (much like C#'s using construct).
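If the nc objects don't implement the context-manager protocol themselves, contextlib.closing gives the same guarantee for anything with a close() method. A minimal sketch applied to the loop from the question (it makes sure close() runs even on the error path, though it won't help if the mmap is kept alive by the returned array views):

import os, glob
from contextlib import closing
from pupynere import NetCDFFile as nc

for infile in glob.glob(os.path.join(path, filter)):
    # closing() calls curData.close() even if an exception is raised.
    with closing(nc(infile, 'r')) as curData:
        vals.append(curData.variables['O3.MIXING.RATIO'][:])
        alts.append(curData.variables['ALTITUDE'][:])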
How expensive is the nc() call? If it is 'cheap enough' to run twice on every file, does this work?
for infile in glob.glob(os.path.join(path, filter)):
    curData = nc(infile,'r')
    vals.append(curData.variables['O3.MIXING.RATIO'][:])
    curData.close()
    curData = nc(infile,'r')
    alts.append(curData.variables['ALTITUDE'][:])
    curData.close()
Well, almost everything is in the title. I have a dbf file which I would like to copy even while it is locked (being edited) by another program, such as DBU.
If I try to open it or copy it with shutil.copy, I get:
>>> f = open('test.dbf', 'rb')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 13] Permission denied: 'test.dbf'
I know that it is locked at the Windows level, because I am unable to copy it with a batch script or with Windows Explorer. But is there any method to copy such a file?
In general, you can't. Even if you were to circumvent the locking mechanism, another process might be in the middle of writing to the file, and the snapshot you would take may be in an inconsistent state.
Depending on your use case, Volume Shadow Copy might be of relevance.
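For illustration only, here is a minimal sketch of that approach, assuming a shadow copy of the volume has already been created (for example with vssadmin or wmic); the shadow device path and the file locations below are hypothetical:

import shutil

# Hypothetical shadow-copy device path; the real one is reported by
# "vssadmin list shadows" after the snapshot has been created.
shadow_root = r'\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1'

# The snapshot is a read-only, point-in-time view of the volume, so the
# live file's lock does not apply to it.
shutil.copyfile(shadow_root + r'\data\test.dbf', r'C:\backup\test.dbf')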
There is a tool from Joakim Schicht that copies any locked file.
The only issue is that some AV products flag it as malicious, when it is not.
Depending on your use case, this can be a solution.
https://github.com/jschicht/RawCopy