I'm using OpenLDAP on Mac OS X Server 10.6 and need to generate a vCard for all the users in a given group. Using ldapsearch I can list all the memberUids of the users in that group. I also found a Perl script, Advanced LDAP Search (ALS), that generates the vCard easily. ALS can be found here: http://www.ldapman.org/tools/als.gz
So what I need to do is write a wrapper script (in Python or Perl) that loops through the memberUids, runs the ALS command for each one to create the vCard, and appends it to a file.
This command provides the memberUids:
ldapsearch -x -b 'dc=ldap,dc=server,dc=com' '(cn=testgroup)'
Then running ALS gives the vCard:
als -b dc=ldap,dc=server,dc=com -V uid=aaronh > vcardlist.vcf
If it's easier to do this in Perl, since ALS is already written in it, that would be fine. I've done more work in Python, but I'm open to suggestions.
Thanks in advance,
Aaron
EDIT:
Here is a link to the Net::LDAP code that I have so far. It pulls down the LDAP entries with all the user information. What I'm missing is how to capture just the UID for each user and then push it into ALS.
http://www.queencitytech.com/net-ldap
Here is an example entry (after running the code from the above link):
#-------------------------------
DN: uid=aaronh,cn=users,dc=ldap,dc=server,dc=com
altSecurityIdentities : Kerberos:aaronh@LDAP.SERVER.COM
apple-generateduid : F0F9DA73-70B3-47EB-BD25-FE4139E16942
apple-imhandle : Jabber:aaronh@ichat.server.com
apple-mcxflags : <?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>simultaneous_login_enabled</key>
<true/>
</dict>
</plist>
authAuthority : ;ApplePasswordServer;0x4c11231147c72b59000001f800001663,1024 35 131057002239213764263627099108547501925287731311742942286788930775556419648865483768960345576253082450228562208107642206135992876630494830143899597135936566841409094870100055573569425410665510365545238751677692308677943427807426637133913499488233527734757673201849965347880843479632671824597968768822920700439 root@ldap.server.com:192.168.1.175;Kerberosv5;0x4c11231147c72b59000001f800001663;aaronh@LDAP.SERVER.COM;LDAP.SERVER.COM;1024 35 131057002239213764263627099108547501925287731311742942286788930775556419648865483768960345576253082450228562208107642206135992876630494830143899597135936566841409094870100055573569425410665510365545238751677692308677943427807426637133913499488233527734757673201849965347880843479632671824597968768822920700439 root@ldap.server.com:192.168.1.170
cn : Aaron Hoffman
gidNumber : 20
givenName : Aaron
homeDirectory : 99
loginShell : /bin/bash
objectClass : inetOrgPerson, posixAccount, shadowAccount, apple-user, extensibleObject, organizationalPerson, top, person
sn : Hoffman
uid : aaronh
uidNumber : 2643
userPassword : ********
#-------------------------------
My language of choice would be Perl - but only because I've done similar operations using Perl and LDAP.
If I remember correctly, that ldapsearch command will give you the full LDIF entry for each uid in the testgroup cn. If that's the case, then you'll need to clean it up a bit before it's ready for the als part. Though it's definitely not the most elegant solution, a quick and dirty method is to use backticks and run the command's output through a grep. This will return a nice list of all the memberUids. From there it's just a simple foreach loop and you're done. Without any testing or knowing for sure what your LDAP output looks like, I'd go with something like this:
#!/usr/bin/perl
# should return a list of "memberUid: name" entries
@uids = `ldapsearch -x -b 'cn=testgroup,cn=groups,dc=ldap,dc=server,dc=com' | grep memberUid:`;
foreach (@uids) {
    $_ =~ s/memberUid: //;  # strip the "memberUid: " prefix, leaving just the name
    chomp $_;               # get rid of the pesky newline
    system "als -b \"dc=ldap,dc=server,dc=com\" -V uid=$_ >> vcardlist.vcf";
}
As I said, I haven't tested this, and I'm not exactly sure what the output of your ldapsearch looks like, so you may have to tweak it a bit to fit your exact needs. That should be enough to get you going though.
If anyone has a better idea I'd love to hear it too.
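For completeness, since the question mentions Python: here's a rough sketch of the same backticks-and-grep approach in Python 2.7+ (untested; it assumes ldapsearch and als are on the PATH and uses the same base DN as above).
#!/usr/bin/env python
import subprocess

BASE_DN = "dc=ldap,dc=server,dc=com"

# pull the group entry and keep only the memberUid lines
output = subprocess.check_output(
    ["ldapsearch", "-x", "-b", "cn=testgroup,cn=groups," + BASE_DN]).decode()

with open("vcardlist.vcf", "w") as vcf:
    for line in output.splitlines():
        if line.startswith("memberUid:"):
            uid = line.split(":", 1)[1].strip()
            # run ALS for this uid and append its vCard to the file
            subprocess.call(["als", "-b", BASE_DN, "-V", "uid=" + uid],
                            stdout=vcf)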
I am trying to add a multi-line comment using the following code, but it fails with the error shown below. Any guidance on how to fix it?
message = """PLEASE RESOLVE MERGE CONFLICTS
WHAT DO I HAVE TO DO IN CASE OF MERGE CONFLICTS:
https://confluence.sd.company.com/display/WFI/AUTO+CHERRY-PICK
""".replace("\n","\n\n")
code_review_minus_two_cmd = "ssh -p 29418 tech-gerrit.sd.company.com gerrit review %s --label Code-Review=-2 --message '%s'"%(propagated_gerrit_commit,message)
code_review_minus_two_cmd_output,code_review_minus_two_cmd_error = runCmd(code_review_minus_two_cmd)
ERROR:
fatal: "RESOLVE" is not a valid patch set
Seems related to this bug.
The ways I can see to resolve it from looking through the ticket are:
Use -m instead of --message
Add double quotes around the message
Sample from the review in the bug link:
ssh -p 29418 review.example.com gerrit review -m '"Build Successful"'
Hope something here works. I don't have a Gerrit account to test against myself.
You could use a JSON-formatted message. The easiest way would be to create a file with the following content:
{
  "labels": {
    "Code-Review": "-2"
  },
  "message": "PLEASE RESOLVE MERGE CONFLICTS\nWHAT DO I HAVE TO DO IN CASE OF MERGE CONFLICTS:\nhttps://confluence.sd.company.com/display/WFI/AUTO+CHERRY-PICK"
}
Then run this ssh command:
cat filename.json | ssh -p 29418 review.example.com gerrit review --json
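If you'd rather stay inside the Python script than shell out to cat, here is a sketch of the same idea (review.example.com stands in for your Gerrit host, as in the example above; json.dumps takes care of the quoting that broke the original command):
import json
import subprocess

# build the review payload; json.dumps handles the newlines and quoting
payload = json.dumps({
    "labels": {"Code-Review": "-2"},
    "message": "PLEASE RESOLVE MERGE CONFLICTS\n"
               "WHAT DO I HAVE TO DO IN CASE OF MERGE CONFLICTS:\n"
               "https://confluence.sd.company.com/display/WFI/AUTO+CHERRY-PICK",
})

# pipe the JSON to `gerrit review --json` over ssh, as in the cat example above
proc = subprocess.Popen(
    ["ssh", "-p", "29418", "review.example.com", "gerrit", "review", "--json"],
    stdin=subprocess.PIPE)
proc.communicate(payload.encode())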
It seems the quoting around the "message" definition isn't working as expected. Gerrit is receiving the parameter like this: --message PLEASE RESOLVE MERGE... so it assumes that "PLEASE" is the message and "RESOLVE" is the next parameter (the patch set), as defined on the gerrit review page of the Gerrit documentation.
Try to use backslashes to escape the double quotes, like this:
message = "\"PLEASE RESOLVE MERGE...\""
The following example works in bash.
ssh -p 29418 review.example.com gerrit review -m $'"First line\nSecond line etc."' CHANGE_ID,PATCHSET_ID/COMMIT_SHA
I am using the Jenkins Python API to create a parameterized job. I can create a job with one parameter using the following config.xml:
<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description>A build that explores the wonderous possibilities of parameterized builds.</description>
  <keepDependencies>false</keepDependencies>
  <properties>
    <hudson.model.ParametersDefinitionProperty>
      <parameterDefinitions>
        <hudson.model.StringParameterDefinition>
          <name>B</name>
          <description>B, like buzzing B.</description>
          <defaultValue></defaultValue>
        </hudson.model.StringParameterDefinition>
      </parameterDefinitions>
    </hudson.model.ParametersDefinitionProperty>
  </properties>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers class="vector"/>
  <concurrentBuild>false</concurrentBuild>
  <builders>
    <hudson.tasks.Shell>
      <command>ping -c 1 localhost | tee out.txt
echo $A > a.txt
echo $B > b.txt</command>
    </hudson.tasks.Shell>
  </builders>
  <publishers>
    <hudson.tasks.ArtifactArchiver>
      <artifacts>*</artifacts>
      <latestOnly>false</latestOnly>
    </hudson.tasks.ArtifactArchiver>
    <hudson.tasks.Fingerprinter>
      <targets></targets>
      <recordBuildArtifacts>true</recordBuildArtifacts>
    </hudson.tasks.Fingerprinter>
  </publishers>
  <buildWrappers/>
</project>
What I really want is to create a job with multiple parameters. I have tried adding a second <name> tag alongside the first in this XML, but the new job still ends up with only one parameter. Am I changing the XML incorrectly?
Also, is it possible to pre-set parameter values through the API? For example, the parameter B would have the value b after job creation.
Each parameter needs its own hudson.model.StringParameterDefinition section. It sounds like you're trying to put multiple names inside one StringParameterDefinition section, which won't work.
If in doubt, create the job by hand, then go to the page for that job and append '/config.xml'. You will get back the XML for the job. Get your Python to replicate that, and you're all set.
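To make that concrete, here is a sketch assuming the python-jenkins package (the server URL, credentials, and job name are placeholders; adjust for whichever Python API you're actually using). Each parameter gets its own hudson.model.StringParameterDefinition block, and <defaultValue> is where a predefined value such as b for parameter B goes:
import jenkins

# minimal config with two parameters; note one StringParameterDefinition
# block per parameter, and <defaultValue>b</defaultValue> pre-sets B to "b"
CONFIG_XML = """<?xml version='1.0' encoding='UTF-8'?>
<project>
  <properties>
    <hudson.model.ParametersDefinitionProperty>
      <parameterDefinitions>
        <hudson.model.StringParameterDefinition>
          <name>A</name>
          <description>First parameter.</description>
          <defaultValue></defaultValue>
        </hudson.model.StringParameterDefinition>
        <hudson.model.StringParameterDefinition>
          <name>B</name>
          <description>B, like buzzing B.</description>
          <defaultValue>b</defaultValue>
        </hudson.model.StringParameterDefinition>
      </parameterDefinitions>
    </hudson.model.ParametersDefinitionProperty>
  </properties>
  <builders>
    <hudson.tasks.Shell>
      <command>echo $A; echo $B</command>
    </hudson.tasks.Shell>
  </builders>
</project>
"""

server = jenkins.Jenkins('http://localhost:8080', username='user', password='token')
server.create_job('two-param-job', CONFIG_XML)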
So yeah, I've been working on a Python script that extracts the password hash from a Mac.
Now I wanna take it to the next level: crack it.
After some quick research I found John the Ripper (http://www.openwall.com/john/) and decided to try to use that. (Note: I have tried other software, but none of it has been able to crack my test hash.)
The problem is, when I try to start John the Ripper, it fails on me. (I'm using some custom Mac 1.7.3 version; I haven't tried updating yet and would prefer not to.)
Current script (after about 1,000,000 changes and retries):
import os

output__ = "1dc74ff22b199305242d62f76f6a5c5c47b4c2e3"  # test hash (SHA1)
print output__

# write the hash file for john as "username:hash";
# output2[0] (the username) comes from the extraction part of the script
txt = open('john/sha1.txt', 'wt')
sha1textfile = "%s:%s" % (output2[0], output__)
txt.write(sha1textfile)
txt.close()

# small launcher that changes into the john directory and runs it on the hash file
txt2 = open('startjohn.command', 'wt')
stjtextfile = """
#!/bin/bash
cd /hax/john
./run/john sha1.txt
"""
txt2.write(stjtextfile)
txt2.close()

os.system('chmod 777 startjohn.command')  # make the launcher executable
os.system('open startjohn.command')       # open it in Terminal
Now the error I get is:
/hax/startjohn.command ; exit;
My-MacBook:~ albertfreakman$ /hax/startjohn.command ; exit;
No password hashes loaded
logout
Help me solve this problem and save me from insanity!
Sincerely, Duke.
Some quick notes:
output__ is my test hash; I've already got the extract-hash part working.
If you have a solution that uses any hash cracker other than John, that's even better! As long as it can use either a wordlist or brute force.
The hash is SHA1.
Thanks!
Okay, I found the problem: my test hash didn't have CAPITAL LETTERS and therefore wasn't accepted by John the Ripper.
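Based on that finding, the fix in the script above is a one-liner: uppercase the hex digest before writing the hash file (a sketch against the snippet above; variable names as in the question):
# this john build only accepted uppercase hex digests, so normalize first
output__ = output__.upper()
sha1textfile = "%s:%s" % (output2[0], output__)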
I have a specific use case. I am preparing for the GRE. Every time a new word comes up, I look it up at www.mnemonicdictionary.com for its meanings and mnemonics. I want to write a script, preferably in Python (or a pointer to an existing tool would do, as I don't know Python well but am learning), which takes a list of words from a text file, looks each one up on this site, fetches just the relevant portion (meaning and mnemonics), and stores it in another text file for offline use. Is that possible? I tried looking at the source of these pages, but along with the HTML tags they also contain some AJAX functions.
Could someone explain how to go about this?
Example: for the word impecunious, the related HTML source looks like this:
<ul class='wordnet'><li><p>(adj.) not having enough money to pay for necessities</p><u>synonyms</u> : hard up , in straitened circumstances , penniless , penurious , pinched<p></p></li></ul>
but the web page renders like this:
•(adj.) not having enough money to pay for necessities
synonyms : hard up , in straitened circumstances , penniless , penurious , pinched
If you have Bash (version 4+) and wget, here is an example:
#!/bin/bash
template="http://www.mnemonicdictionary.com/include/ajaxSearch.php?word=%s&event=search"

while read -r word
do
    url=$(printf "$template" "$word")
    data=$(wget -O- -q "$url")
    data=${data#* }
    echo "$word: ${data%%<*}"
done < file
Sample output
$> more file
synergy
tranquil
jester
$> bash dict.sh
synergy: the working together of two things (muscles or drugs for example) to produce an effect greater than the sum of their individual effects
tranquil: (of a body of water) free from disturbance by heavy waves
jester: a professional clown employed to entertain a king or nobleman in the Middle Ages
Update: include the mnemonic as well:
template="http://www.mnemonicdictionary.com/include/ajaxSearch.php?word=%s&event=search"

while read -r word
do
    url=$(printf "$template" "$word")
    data=$(wget -O- -q "$url")
    data=${data#* }
    m=${data#*class=\'mnemonic\'}
    m=${m%%</p>*}
    m="${m##* }"
    echo "$word: ${data%%<*}, mnemonic: $m"
done < file
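And since the question asked for Python, here is a rough Python 2 equivalent of the loop above (untested against the live site; it hits the same ajaxSearch endpoint, words.txt and definitions.txt are placeholder file names, and the tag stripping is deliberately crude):
import re
import urllib
import urllib2

TEMPLATE = ("http://www.mnemonicdictionary.com/include/ajaxSearch.php"
            "?word=%s&event=search")

infile = open("words.txt")
outfile = open("definitions.txt", "w")
for word in infile:
    word = word.strip()
    if not word:
        continue
    html = urllib2.urlopen(TEMPLATE % urllib.quote(word)).read()
    # crude tag stripping: drop everything that looks like an HTML tag
    text = re.sub(r"<[^>]+>", " ", html)
    outfile.write("%s: %s\n" % (word, " ".join(text.split())))
infile.close()
outfile.close()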
Use curl and sed from a Bash shell (either Linux, Mac, or Windows with Cygwin).
If I get a second I will write a quick script ... gotta give the baby a bath now though.
I have an XML file with the following data format:
<net NetName="abc" attr1="123" attr2="234" attr3="345".../>
<net NetName="cde" attr1="456" attr2="567" attr3="678".../>
....
Can anyone tell me how I could mine data out of the XML file using an awk one-liner? For example, I would like to look up attr3 of abc; it should return 345.
In general, you don't. XML/HTML parsing is hard enough without trying to do it concisely, and while you may be able to hack together a solution that succeeds with a limited subset of XML, eventually it will break.
Besides, there are many great languages with great XML parsers already written, so why not use one of them and make your life easier?
I don't know whether or not there's an XML parser built for awk, but I'm afraid that if you want to parse XML with awk you're going to get a lot of "hammers are for nails, screwdrivers are for screws" answers. I'm sure it can be done, but it's probably going to be easier for you to write something quick in Perl that uses XML::Simple (my personal favorite) or some other XML parsing module.
Just for completeness, I'd like to note that if your snippet is an example of the entire file, it is not valid XML. Valid XML should have start and end tags, like so:
<netlist>
<net NetName="abc" attr1="123" attr2="234" attr3="345".../>
<net NetName="cde" attr1="456" attr2="567" attr3="678".../>
....
</netlist>
I'm sure invalid XML has its uses, but some XML parsers may whine about it, so unless you're dead set on using an awk one-liner to try to half-ass "parse" your "XML," you may want to consider making your XML valid.
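Once it's wrapped in a root element like that, any stock XML parser can answer the question directly; for instance, a minimal Python sketch with xml.etree.ElementTree (assuming the wrapped file is saved as netlist.xml):
import xml.etree.ElementTree as ET

tree = ET.parse("netlist.xml")

# find the <net> element whose NetName is "abc" and print its attr3
for net in tree.getroot().findall("net"):
    if net.get("NetName") == "abc":
        print(net.get("attr3"))  # prints 345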
In response to your edits, I still won't do it as a one-liner, but here's a Perl script that you can use:
#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;

sub usage {
    die "Usage: $0 [NetName] ([attr])\n";
}

my $file = XMLin("file.xml", KeyAttr => { net => 'NetName' });

usage() if @ARGV == 0;

exists $file->{net}{$ARGV[0]}
    or die "$ARGV[0] does not exist.\n";

if(@ARGV == 2) {
    exists $file->{net}{$ARGV[0]}{$ARGV[1]}
        or die "NetName $ARGV[0] does not have attribute $ARGV[1].\n";
    print "$file->{net}{$ARGV[0]}{$ARGV[1]}.\n";
} elsif(@ARGV == 1) {
    print "$ARGV[0]:\n";
    print " $_ = $file->{net}{$ARGV[0]}{$_}\n"
        for keys %{ $file->{net}{$ARGV[0]} };
} else {
    usage();
}
Run this script from the command line with 1 or 2 arguments. The first argument is the 'NetName' you want to look up, and the second is the attribute you want to look up. If no attribute is given, it should just list all the attributes for that 'NetName'.
I have written a tool called xml_grep2, based on XML::LibXML, the Perl interface to libxml2.
You would find the value you're looking for by doing this:
xml_grep2 -t '//net[@NetName="abc"]/@attr3' to_grep.xml
The tool can be found at http://xmltwig.com/tool/
xmlgawk can parse XML very easily.
$ xgawk -lxml 'XMLATTR["NetName"]=="abc"{print XMLATTR["attr3"]}' test.xml
This one-liner parses the XML and prints "345".
If you do not have xmlgawk and your XML format is fixed, plain awk can do it:
$ nawk -F '[ ="]+' '/abc/{for(i=1;i<=NF;i++){if($i=="attr3"){print $(i+1)}}}' test.xml
This script also returns "345". But I think it is very dangerous, because plain awk cannot really parse XML.
You might try this nifty little script: http://awk.info/?doc/tools/xmlparse.html