I've been using python-libtorrent to check what pieces belong to a file in a torrent containing multiple files.
I'm using the code below to iterate over the files in the torrent:
import libtorrent

info = libtorrent.torrent_info('~/.torrent')
for f in info.files():
    print f
But this returns <libtorrent.file_entry object at 0x7f0eda4fdcf0> and I don't know how to extract any information from it.
I can't find a torrent_info property that would return the piece information for the individual files. Any help is appreciated.
The API is documented here and here. Obviously the Python API can't always mirror the C++ one exactly, but generally the interface takes a file index and returns some property of that file.
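For example, here is a minimal sketch that lists each file together with the pieces it spans, assuming a binding where files() yields file_entry objects (as in the question) and where map_file() and piece_length() are exposed; the .torrent path is hypothetical:

import libtorrent

info = libtorrent.torrent_info('test.torrent')
piece_len = info.piece_length()

for idx, f in enumerate(info.files()):
    # map_file() maps a byte offset within file idx into the piece space
    req = info.map_file(idx, 0, 0)
    first_piece = req.piece
    last_byte = req.piece * piece_len + req.start + f.size - 1
    last_piece = last_byte // piece_len
    print f.path, f.size, first_piece, last_piece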
I am setting up a Python script to upload files to an Azure FileShare. I am following this official tutorial for SDK v2.
Now, the tutorial uses this argument: content_settings=ContentSettings(content_type='image/png'). My problem is that the ContentSettings class is not very well documented, and I have no idea which values of content_type are valid. This is the official documentation I could find. Does anyone know where I can find a list of valid content types for this argument? Does anyone know what happens if this argument is misspecified?
Adding to what @GauravMantri has mentioned: when you set a content type other than the file's actual content type, the file will no longer be in a valid format (except for a few content types) and cannot be read.
For demonstration purposes, I have used the same sample as mentioned in the Official Documentation. Below is the code I'm using.
from azure.storage.file import FileService, ContentSettings

file_service = FileService(account_name='<ACCOUNT_NAME>',
                           account_key='<ACCOUNT_KEY>')

# share name, directory (None = share root), destination file name, local path
file_service.create_file_from_path(
    '<FILESHARE_NAME>',
    None,
    'sunset.png',
    'sunset.png',
    content_settings=ContentSettings(content_type='video/x-msvideo'))
RESULTS:
Below are the results I got when trying to download an image/png file that was uploaded with the video/x-msvideo content type.
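As for valid values: content_type accepts any standard MIME type string; the IANA media types registry is the usual reference. If you'd rather not hard-code it, here is a minimal sketch using Python's standard mimetypes module to guess the type from the file name:

import mimetypes

from azure.storage.file import ContentSettings

# guess_type() returns (type, encoding); fall back to a generic binary type
content_type, _ = mimetypes.guess_type('sunset.png')    # -> 'image/png'
settings = ContentSettings(content_type=content_type or 'application/octet-stream')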
I'm trying to analyze a MIDI file with music21 to get the keys of that file. Does anyone know the command for that, or where to find an example?
I'm new to this.
Thanks a lot in advance.
Assuming you need to analyze with a key-finding algorithm (as opposed to just reading the key signature provided by the encoder, if present), create a music21.stream.Score and call analyze('key'):
import music21

my_score: music21.stream.Score = music21.converter.parse('path.mid')
k = my_score.analyze('key')
print(k.name)
Some other fun stuff like alternateInterpretations and correlationCoefficient are described in the User's Guide. Enjoy!
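For instance, a small sketch of those two attributes (the output naturally depends on the file):

# inspect the runner-up keys and how strongly each one correlates
for alt in k.alternateInterpretations[:3]:
    print(alt, alt.correlationCoefficient)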
I have hundreds of gigs of EVTX security event logs that I want to parse for specific Event IDs (4624) and, within those events, specific usernames (joe). I have attempted to use a PowerShell cmdlet like below:
Get-WinEvent -FilterHashtable @{Path="mypath.evtx"; ProviderName="securitystuffprovider"; Id=4624}
I know I can pass a variable containing a list to the Path parameter to cover all of my .evtx files, but I am unable to filter on a substring of the event's message. Also, this takes an incredibly long time to parse just one .evtx file, much less 150 or so. I know there is a Python package to parse EVTX files, but I am not sure what that would look like, as python-evtx doesn't provide great examples of importing and using the package itself. I cannot extract all of the data to CSV, as that would take too much disk space. Any ideas would be amazing. Thanks.
Use -Path with the -FilterXPath parameter, and then filter using an XPath expression like so:
$Username = 'jdoe'
$XPathFilter = "*[System[(EventID=4624)] and EventData[Data[@Name='SubjectUserName'] and (Data='$Username')]]"
Get-WinEvent -Path C:\path\to\log\files\*.evtx -FilterXPath $XPathFilter
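If you do want to try the python-evtx route mentioned in the question, here is a rough sketch, assuming a recent python-evtx whose records expose an xml() view. The substring filter is deliberately naive; for real use you would parse the XML and check the EventID and SubjectUserName fields properly:

from Evtx.Evtx import Evtx

def matching_records(path, username='joe'):
    # records() is a generator, so huge logs are streamed rather than loaded whole
    with Evtx(path) as log:
        for record in log.records():
            xml = record.xml()
            # crude filter: both strings must appear somewhere in the record
            if '>4624<' in xml and username in xml:
                yield xml

for xml in matching_records('mypath.evtx'):
    print(xml)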
I am very new to Python and not very familiar with its data structures.
I am writing an automatic JSON parser in Python; the JSON message is read into a dictionary using UltraJSON:
jsonObjs = ujson.loads(data)
Now, if I try something like jsonObjs[param1][0][param2], it works fine.
However, I need to get the path from an external source (I read it from the DB). We initially thought we'd just write in the DB:
myPath = [param1][0][param2]
and then try to access:
jsonObjs[myPath]
But after a couple of failures I realized I'm trying to access:
jsonObjs[[param1][0][param2]]
Is there a way to fix this without parsing myPath?
Many thanks for your help and advice.
Store the keys in a format that preserves type information, e.g. JSON, and then use reduce() to perform recursive accesses on the structure.
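A minimal sketch of that idea (the payload and the stored path here are made up for illustration):

import json
from functools import reduce   # reduce() is also a builtin in Python 2

data = '{"param1": [{"param2": 42}]}'            # example message
jsonObjs = json.loads(data)

# store the path in the DB as a JSON array, e.g. '["param1", 0, "param2"]'
path = json.loads('["param1", 0, "param2"]')

# walk the structure one key/index at a time
value = reduce(lambda obj, key: obj[key], path, jsonObjs)
print(value)   # 42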
I'm almost an absolute beginner in Python, but I have been asked to manage a somewhat difficult task. I have read many tutorials and found some very useful tips on this website, but I don't think this exact question has been asked before, at least not phrased the way I tried it in the search engine.
I have managed to write some URLs into a CSV file. Now I would like to write a script that opens this file, opens the URLs, and writes their content into a dictionary. But I have failed: my script can print the addresses, but cannot process the files.
Interestingly, my script did not give the same error message each time. Here is the last one:
req.timeout = timeout
AttributeError: 'list' object has no attribute 'timeout'
So I think my script faces several problems:
1 - is my method for opening the URLs the right one?
2 - what is wrong with the way I build the dictionary?
Here is my attempt below. Thanks in advance to anyone who can help!
import csv
import urllib
dict = {}
test = csv.reader(open("read.csv","rb"))
for z in test:
    sock = urllib.urlopen(z)
    source = sock.read()
    dict[z] = source
    sock.close()
print dict
First thing, don't shadow built-ins. Rename your dictionary to something else as dict is used to create new dictionaries.
Secondly, the csv reader produces a list per line, containing all the columns. Either reference the column explicitly, with urllib.urlopen(z[0]) # first column of the line, or open the file with a plain open() and iterate over it.
Apart from that, it works for me.
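Putting both fixes together, a corrected version of the script might look like this (assuming the URL sits in the first column of read.csv):

import csv
import urllib

url_contents = {}                       # plain name, no shadowing of dict()
with open("read.csv", "rb") as f:
    for row in csv.reader(f):
        url = row[0]                    # first column of the line holds the URL
        sock = urllib.urlopen(url)
        url_contents[url] = sock.read()
        sock.close()

print url_contents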