I am writing a plugin for IDA (in Python) that utilizes the etcd remote key-value storage system. My problem is that when I attempt to get a lock on the server, like so:
lock = etcd.Lock(self.client, 'ida_lock')
# Should time out after 30 seconds. Hopefully that is enough.
lock.acquire(blocking=True, lock_ttl=None, timeout=30)
if lock.is_acquired:
    data, idc_file = self.get_idc_data()
    if os.path.isfile('expendable.idc'):
        self.client.write('/fREd/' + self.md5 + '/all/', idc_file,
                          prevValue=open('expendable.idc', 'r').readlines())
    else:
        self.client.write('/fREd/' + self.md5 + '/all/', idc_file)
    lock.release()
IDA freezes, and I was wondering if anyone had any insight into why this is happening or how to fix it.
For reference, the method that includes this code is called via a keyboard shortcut:
idaapi.add_hotkey('Ctrl-.', self.push_data)
There is no doubt that it is the lock that causes the problem.
You can look at the python-etcd source at https://github.com/jplana/python-etcd
There are keys that already exist under the directory /_locks/ida_lock.
To list the files under /_locks/ida_lock:
etcdctl ls /_locks/ida_lock
To rescue yourself from this, run:
etcdctl rm /_locks/ida_lock --dir --recursive
To avoid this situation, call lock.release() in a finally block; if you don't release, a key will remain under /_locks/ida_lock.
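For illustration, a minimal sketch of that cleanup, reusing the same python-etcd calls as the question (the write is simplified to the no-prevValue case):

lock = etcd.Lock(self.client, 'ida_lock')
try:
    lock.acquire(blocking=True, lock_ttl=None, timeout=30)
    if lock.is_acquired:
        data, idc_file = self.get_idc_data()
        self.client.write('/fREd/' + self.md5 + '/all/', idc_file)
finally:
    # always runs, so no key lingers under /_locks/ida_lock
    if lock.is_acquired:
        lock.release()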
Furthermore, you can add some logging configuration (which you can reference here) to dig deeper when dealing with this kind of problem.
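As a sketch, assuming python-etcd logs through the standard logging module, something like this surfaces its internal activity:

import logging

# show everything, then narrow to the library's loggers
logging.basicConfig(level=logging.DEBUG)
logging.getLogger('etcd').setLevel(logging.DEBUG)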
I want to run Python code as a COM server. Eventually I want to run an RTD server available here. But first I want to know what exactly you have to do to get any COM server running. So let's focus on this example.
class HelloWorld:
    _reg_clsid_ = "{7CC9F362-486D-11D1-BB48-0000E838A65F}"
    _reg_desc_ = "Python Test COM Server"
    _reg_progid_ = "Python.TestServer"
    _public_methods_ = ['Hello']
    _public_attrs_ = ['softspace', 'noCalls']
    _readonly_attrs_ = ['noCalls']

    def __init__(self):
        self.softspace = 1
        self.noCalls = 0

    def Hello(self, who):
        self.noCalls = self.noCalls + 1
        # insert "softspace" number of spaces
        return "Hello" + " " * self.softspace + who

if __name__ == '__main__':
    import win32com.server.register
    win32com.server.register.UseCommandLine(HelloWorld)
OK, this works in the sense that there were no errors and the server is registered, hence it is available in the HKEY_CLASSES_ROOT registry. But what can I do with this? Some say you have to compile an instance and have a .dll or .exe file. What else do I have to do?
Well, I ran your example. The registry key for the server is at:
HKEY_CLASSES_ROOT\WOW6432Node\CLSID\{7CC9F362-486D-11D1-BB48-0000E838A65F}
It has two subkeys: one for LocalServer32 and one for InProcServer32.
I created a simple VBA macro in Excel:
Sub d()
    Set obj = CreateObject("Python.TestServer")
    MsgBox obj.Hello("joe")
End Sub
The macro ran just fine. My version of Excel is 64-bit. I ran the macro and fired up Task Manager while the message box was being displayed; I could see pythonw.exe running in the background.
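Incidentally, if you'd rather test from Python than VBA, here is a quick sketch using pywin32's client side (it assumes the server above has already been registered):

import win32com.client

# late-bound COM call through the registered ProgID
obj = win32com.client.Dispatch("Python.TestServer")
print(obj.Hello("joe"))  # prints "Hello joe" (one softspace)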
The only difference between my Python script and yours is probably the name, and also that I added a print line to make sure I was executing the function:
if __name__ == '__main__':
    import win32com.server.register
    print("Going to register...")
    win32com.server.register.UseCommandLine(HelloWorld)
When I ran the 64-bit cscript.exe test, it worked... as expected... when I ran the 32-bit version, it failed.
I know why...sort of...
The registry entry for InProcServer32 is pythoncom36.dll
That's no good; it is an incomplete path. I tried modifying the PATH variable in my shell to add one of the three places where the DLL existed on my system, but it didn't work. I also tried hard-coding the full path in the InProcServer32 key. That didn't work either; it kept saying it couldn't find the file.
I ran procmon, and then I observed that it couldn't load vcruntime140.dll. I found the directory under Python where those files were and added it to my path. It got further along. If I cared enough, I might try more; eventually, using procmon, I could find all the problems. But you can do that.
My simple solution was to rename the key InProcServer32 for the CLSID to _InProcServer32. How does that work? Well, the system can't find InProcServer32, so it always uses LocalServer32, for both 32-bit and 64-bit processes. If you need the speed of in-process, then you'd need to fix the problem by using procmon and being relentless until you solved all the File Not Found errors and such. But if you don't need the speed of in-process, just using LocalServer32 might solve the problem.
Caveats: I'm using an Anaconda distro that my employer limits access to, and I can only install it from the employee store. YMMV.
I'm running Twisted (Python 2.7.x) on Alpine Linux 3.7 inside Docker.
I now wanted to use the twisted.internet.inotify module, but it fails to load.
It's triggering the following exception in twisted.python._inotify:
name = ctypes.util.find_library('c')
if not name:
    raise ImportError("Can't find C library.")
libc = ctypes.cdll.LoadLibrary(name)
initializeModule(libc)
The problem is that Alpine Linux 3.x has a bug which makes ctypes.util.find_library('c') return None.
I've compared the code with the inotify module, which I've successfully used in Alpine before, and that one deals with the issue in the following way:
_FILEPATH = ctypes.util.find_library('c')
if _FILEPATH is None:
    _FILEPATH = 'libc.so.6'
instance = ctypes.cdll.LoadLibrary(_FILEPATH)
So I've tried calling ctypes.util.find_library('libc.so.6') in the interpreter, and that call succeeds.
What I now want to do is monkey-patch twisted.python._inotify so that it loads libc.so.6 instead of c, but I'm not sure how to do that, because I can't load the module at all.
I have one option, which is to sed the source code during docker build, or possibly even inside the server right after it starts, but that feels like a hack.
I've seen that Twisted contains a MonkeyPatch module, but I have no idea on how to use it, or if it is even suited for this task.
How can I solve this problem in the cleanest possible way?
Note: The server is running as non-root, so it has no write access to /usr/lib/python2.7/site-packages/twisted/python/_inotify.py.
This means that I either have to sed it in the Dockerfile, or patch it in memory when the server starts, before it loads the module (if that's possible, I'd prefer that).
In addition to anything else, I hope that you contribute a patch to Twisted to either solve this problem outright or make it easier to solve from application code or at an operations level.
That said, here's a monkey-patch that should do for you:
import ctypes.util
def fixed_find_library(name):
    if name == "c":
        result = original_find_library(name)
        if result is not None:
            return result
        else:
            return "libc.so.6"
    return original_find_library(name)

original_find_library = ctypes.util.find_library
ctypes.util.find_library = fixed_find_library
# The rest of your application code...
This works simply by codifying the logic from your question, which you suggest works around the problem. As long as this code runs before _inotify.py is imported, then when it does get imported, it will end up using the "fixed" version instead of the original.
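Since you mentioned Twisted's monkey-patching helper: twisted.python.monkey.MonkeyPatcher can apply (and later undo) the same substitution. A minimal sketch, reusing fixed_find_library from above:

import ctypes.util
from twisted.python.monkey import MonkeyPatcher

patcher = MonkeyPatcher((ctypes.util, 'find_library', fixed_find_library))
patcher.patch()  # must run before twisted.internet.inotify is imported
# ... import twisted.internet.inotify and start the reactor ...
# patcher.restore() puts the original find_library back if you ever need it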
While monkey-patching as Jean-Paul indicates seems to be the best fix, here is an approach which modifies Twisted's source code.
When using the Python Docker API to run the container:
container = client.containers.run(...)
patch = 'sed -i.bak s/raise\ ImportError/name\ =\ \\"libc.so.6\\"\ #\ raise\ ImportError/g /usr/lib/python2.7/site-packages/twisted/python/_inotify.py'
print container.exec_run(patch, user='root')
or when in bash inside the container:
sed -i.bak s/raise\ ImportError/name\ =\ \"libc.so.6\"\ #\ raise\ ImportError/g /usr/lib/python2.7/site-packages/twisted/python/_inotify.py
I am running a Python script on several Linux nodes (after the creation of a pool) using Azure Batch. Each node runs Ubuntu 14.04.5 LTS.
In the script, I upload several files to each node and then run several tasks on each of these nodes. But I get a "Permission Denied" error when I try to execute the first task. The task is actually an unzip of a few files (FYI, the upload of these zip files went well).
This script was running fine until a few weeks ago. I suspect an update of the Ubuntu version, but maybe it's something else.
Here is the error I get:
error: cannot open zipfile [ /mnt/batch/tasks/shared/01-AXAIS_HPC.zip ]
Permission denied
unzip: cannot find or open /mnt/batch/tasks/shared/01-AXAIS_HPC.zip,
Here is the main part of the code:
credentials = batchauth.SharedKeyCredentials(_BATCH_ACCOUNT_NAME, _BATCH_ACCOUNT_KEY)
batch_client = batch.BatchServiceClient(
    credentials,
    base_url=_BATCH_ACCOUNT_URL)

create_pool(batch_client,
            _POOL_ID,
            application_files,
            _NODE_OS_DISTRO,
            _NODE_OS_VERSION)

helpers.create_job(batch_client, _JOB_ID, _POOL_ID)

add_tasks(batch_client,
          _JOB_ID,
          input_files,
          output_container_name,
          output_container_sas_token)
with add_tasks:
def add_tasks(batch_service_client, job_id, input_files,
              output_container_name, output_container_sas_token):
    print('Adding {} tasks to job [{}]...'.format(len(input_files), job_id))
    tasks = list()
    for idx, input_file in enumerate(input_files):
        command = ['unzip -q $AZ_BATCH_NODE_SHARED_DIR/01-AXAIS_HPC.zip -d $AZ_BATCH_NODE_SHARED_DIR',
                   'chmod a+x $AZ_BATCH_NODE_SHARED_DIR/01-AXAIS_HPC/00-EXE/linux/*',
                   'PATH=$PATH:$AZ_BATCH_NODE_SHARED_DIR/01-AXAIS_HPC/00-EXE/linux',
                   'unzip -q $AZ_BATCH_TASK_WORKING_DIR/'
                   '{} -d $AZ_BATCH_TASK_WORKING_DIR/{}'.format(input_file.file_path, idx + 1),
                   'Rscript $AZ_BATCH_NODE_SHARED_DIR/01-AXAIS_HPC/03-MAIN.R $AZ_BATCH_TASK_WORKING_DIR $AZ_BATCH_NODE_SHARED_DIR/01-AXAIS_HPC $AZ_BATCH_TASK_WORKING_DIR/'
                   '{} {}'.format(idx + 1, idx + 1),
                   'python $AZ_BATCH_NODE_SHARED_DIR/01-IMPORT_FILES.py '
                   '--storageaccount {} --storagecontainer {} --sastoken "{}"'.format(
                       _STORAGE_ACCOUNT_NAME,
                       output_container_name,
                       output_container_sas_token)]
        tasks.append(batchmodels.TaskAddParameter(
            'Task{}'.format(idx),
            helpers.wrap_commands_in_shell('linux', command),
            resource_files=[input_file]))

    Split = lambda tasks, n=100: [tasks[i:i + n] for i in range(0, len(tasks), n)]
    SPtasks = Split(tasks)
    for i in range(len(SPtasks)):
        batch_service_client.task.add_collection(job_id, SPtasks[i])
Do you have any insights to help me on this issue? Thank you very much.
Robin
Looking at the error, i.e.
error: cannot open zipfile [ /mnt/batch/tasks/shared/01-AXAIS_HPC.zip ]
Permission denied unzip: cannot find or open /mnt/batch/tasks/shared/01-AXAIS_HPC.zip,
This looks like either the file is not present at the shared directory location, or it does not have the correct permissions. The former is more likely.
Is there any particular reason you are using the shared directory approach, and how are you uploading these zip files? (I.e., check that any async/await usage is correct, so that no greedy process runs your task before the shared directory contents are available on the node.)
Side note: you own the node, so you can RDP/SSH into it and verify that the shared directory files are actually present.
Also, if I may ask, what is the design/user scenario here, and how exactly do you intend to use this?
Recommendation:
There are a few other ways you can use zip files on an Azure Batch node, such as via a resource file or via an application package. (The application package route might suit *.zip files better; a minimal sketch follows the links below.) I have added a few documents and places where you can look at sample implementations and guidance for this.
I think the material and samples below are a good place to start; hope they help. :)
Also, I would recommend recreating your pool if it is old, which will ensure the nodes are running the latest version.
Azure Batch learning path
Azure Batch API basics
Samples & demo link or look here
Detailed walkthrough depending on what you are using, i.e. CloudServiceConfiguration or VirtualMachineConfiguration link.
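For illustration, a minimal sketch of the application package route; the package id 'axais_hpc' and version '1.0' are hypothetical, and the zip itself would be uploaded as an application package beforehand (e.g. through the portal):

import azure.batch.models as batchmodels

new_pool = batchmodels.PoolAddParameter(
    id=_POOL_ID,
    vm_size='STANDARD_D1_V2',
    virtual_machine_configuration=vm_config,  # your existing image config
    application_package_references=[
        batchmodels.ApplicationPackageReference(
            application_id='axais_hpc',
            version='1.0'),
    ])
batch_client.pool.add(new_pool)

Batch then downloads and extracts the package on every node and exposes its path to tasks through an AZ_BATCH_APP_PACKAGE_* environment variable, so the manual unzip into the shared directory goes away.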
I'm trying to send a notification to KDE's knotify from a cron job. The code below works fine, but when I run it as a cron job the notification doesn't appear.
#!/usr/bin/python2
import dbus
import gobject
album = "album"
artist = "artist"
title = "title"
knotify = dbus.SessionBus().get_object("org.kde.knotify", "/Notify")
knotify.event("warning", "kde", [], title, u"by %s from %s" % (artist, album),
              [], [], 0, 0, dbus_interface="org.kde.KNotify")
Anyone know how I can run this as a cron job?
You need to supply an environment variable called DBUS_SESSION_BUS_ADDRESS.
You can get the value from a running kde session.
$ echo $DBUS_SESSION_BUS_ADDRESS
unix:abstract=/tmp/dbus-iHb7INjMEc,guid=d46013545434477a1b7a6b27512d573c
In your KDE startup (the autostart module in the configuration), create a script entry to run after your environment starts up. Output this environment variable's value to a temp file in your home directory; then you can set the environment variable within your cron job or Python script from the temp file.
#!/bin/bash
echo $DBUS_SESSION_BUS_ADDRESS > $HOME/tmp/kde_dbus.session
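On the cron side, a minimal sketch of reading that file back before connecting (it assumes the $HOME/tmp/kde_dbus.session path written by the script above):

import os

# restore the session bus address saved at login
with open(os.path.expanduser('~/tmp/kde_dbus.session')) as f:
    os.environ['DBUS_SESSION_BUS_ADDRESS'] = f.read().strip()

import dbus  # connect only after the variable is set
knotify = dbus.SessionBus().get_object("org.kde.knotify", "/Notify")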
As of 2019 / KDE 5, this still works, but the result looks slightly different:
$ echo $DBUS_SESSION_BUS_ADDRESS
unix:path=/run/user/1863/bus
To test it, you can do the following:
$ qdbus org.freedesktop.ScreenSaver /ScreenSaver SimulateUserActivity
You may need to use qdbus-qt5 if you still have the old KDE 4 binaries installed along with KDE 5. You can determine which one to use with the following:
export QDBUS_CMD=$(which qdbus-qt5 2> /dev/null || which qdbus || exit 1)
I run this with a sleep statement when I want to prevent my screensaver from engaging and it works. I run it remotely from another computer beside my main one.
For those who want to know how I lock and unlock the remote screensaver, it's a different command...
loginctl lock-session 1
or
loginctl unlock-session 1
That is assuming that your session is the first one. You can add scripts to the KDE notification events for screensaver start and stop. Hope this information helps someone who wants to synchronize their screen savers across more than one computer.
I know this is a long answer, but I wanted to provide an example for you to test with and a practical use case where I use it today.
I'm writing an FTP client using Twisted that downloads a lot of files, and I'm trying to do it fairly intelligently. However, I've been having a problem: I'll download several files very quickly (sometimes ~20 per batch, sometimes ~250), then the downloading will hang, only for the connections to eventually time out, and then the download-and-hang cycle starts all over again. I'm using a DeferredSemaphore to only download three files at a time, but I now suspect that this is probably not the right way to avoid throttling the server.
Here is the code in question:
def downloadFiles(self, result, directory):
    # make download directory if it doesn't already exist
    if not os.path.exists(directory['filename']):
        os.makedirs(directory['filename'])
    log.msg("Downloading files in %r..." % directory['filename'])
    files = filterFiles(None, self.fileListProtocol)
    # from http://stackoverflow.com/questions/2861858/queue-remote-calls-to-a-python-twisted-perspective-broker/2862440#2862440
    # use a DeferredSemaphore to limit the number of files downloaded
    # simultaneously from the directory to 3
    sem = DeferredSemaphore(3)
    jobs = [sem.run(self.downloadFile, f, directory) for f in files]
    d = gatherResults(jobs)
    return d

def downloadFile(self, f, directory):
    filename = os.path.join(directory['filename'], f['filename']).encode('ascii')
    log.msg('Downloading %r...' % filename)
    d = self.ftpClient.retrieveFile(filename, FTPFile(filename))
    return d
You'll notice that I'm reusing an FTP connection (active mode, by the way) and using my own FTPFile instance to make sure the local file object gets closed when the file download connection is 'lost' (i.e. completed). Looking at FTPClient, I wonder if I should be using queueCommand directly. To be honest, I got lost following the retrieveFile command to _openDataConnection and beyond, so maybe it's already being used.
Any suggestions? Thanks!
I would suggest using queueCommand, as you suggested; I suspect the semaphore you're using is causing your issues. I believe using queueCommand will limit your FTPClient to a single active connection (though I'm just speculating), so you may want to think about creating a few FTPClient instances and passing download jobs to them if you want to do things quickly. If you use queueStringCommand, you get a Deferred that you can use to determine where each client is up to, and even add another job to the queue for that client in the callback.
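For what it's worth, here is a rough sketch of the several-clients idea; the hostname, credentials, and the job list are placeholders, not taken from the question:

from twisted.internet import defer, reactor
from twisted.internet.protocol import ClientCreator
from twisted.protocols.ftp import FTPClient

def connectClients(n=3):
    # open n independent FTP control connections
    creator = ClientCreator(reactor, FTPClient,
                            username='anonymous', password='guest@example.com')
    return defer.gatherResults(
        [creator.connectTCP('ftp.example.com', 21) for _ in range(n)])

def downloadAll(clients, jobs):
    # round-robin (remotePath, consumer) jobs across the clients; each
    # client serializes its own commands through its internal queue
    ds = [clients[i % len(clients)].retrieveFile(path, consumer)
          for i, (path, consumer) in enumerate(jobs)]
    return defer.gatherResults(ds)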