By default, Duo's directory sync runs once daily; due to business demand, this needs to happen every 2 hours. Looking at the Duo API, there is a command for syncing a single user:
python -m duo_client.client --ikey <> --skey <> --host api-<>.duosecurity.com --method POST --path /admin/v1/users username=<> /directorysync/<DIR SYNC>/syncuser
However, I don't see an API call for a general overall sync with Active Directory. To work around that, I was hoping to get all the users from the 2FA group and sync them by username in a loop using the following:
import sys
import os
import duo_client
from ldap3 import Server, Connection, ALL, NTLM, ALL_ATTRIBUTES, ALL_OPERATIONAL_ATTRIBUTES, AUTO_BIND_NO_TLS, SUBTREE
from ldap3.core.exceptions import LDAPCursorError
server_name = ''
domain_name = ''
user_name = ''
password = '!'
admin_api = duo_client.Admin(
    ikey="",
    skey="",
    host="api-.duosecurity.com",
)
format_string = '{:40}'
print(format_string.format('samaccountname'))
server = Server(server_name, get_info=ALL)
conn = Connection(server, user='{}\\{}'.format(domain_name, user_name), password=password, authentication=NTLM,
                  auto_bind=True)
conn.search('dc={},dc=int'.format(domain_name), '(&(objectCategory=user)(memberOf=CN=2FA,OU=,OU=,OU=,OU=,DC=,DC=int))',
            attributes=[ALL_ATTRIBUTES, ALL_OPERATIONAL_ATTRIBUTES])
for e in sorted(conn.entries):
    print(e.samaccountname)
    os.system("python -m duo_client.client --ikey --skey --host api-.duosecurity.com --method POST --path /admin/v1/users username={}/directorysync//syncuser".format(e.samaccountname))
The above code somewhat works, but for some users it also re-creates them with user IDs such as "username/Dir/DIRAPI/usersync", as shown in the screenshots "Duo API" and "Syncing User" (images omitted).
It turned out the username={} parameter was in the wrong place.
POST /admin/v1/users with username={} creates a new user, which is why I was seeing IDs like username/..../....
Below is the right way to make the API call:
os.system("python -m duo_client.client --ikey --skey --host api-.duosecurity.com --method POST --path /admin/v1/users/directorysync/syncuser username={}".format(e.samaccountname))"
Using Python 3.10.10 on Windows 10, I am trying to connect to a Mongo database, ideally via SSH. On the command line I just do
ssh myuser@111.222.333.444
mongo
and I can query the Mongo DB. With the following Python code
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

HOST = "111.222.333.444"
USER = "myuser"

class Mongo:
    def __init__(self):
        self.host = HOST
        self.user = USER
        self.uri = f"mongodb://{self.user}@{self.host}"

    def connection(self):
        try:
            client = MongoClient(self.uri)
            client.server_info()
            print('Connection Established')
        except ConnectionFailure as err:
            raise err
        return client

mongo = Mongo()
mongo.connection()
however I get an error
pymongo.errors.ConfigurationError: A password is required.
But as I am able to just log in via SSH using my public key, I do not require a password. How can this be solved in Python?
I also tried to run a command over SSH alone, like
ssh myuser@111.222.333.444 "mongo; use mydb; show collections"
but this does not work like that either.
You are doing two different things. In the first command you connect via SSH (port 22) to the remote server and start the mongo shell on that server. In the second command, you try to connect directly to the mongod server (default port 27017).
In your case, myuser is a user on the remote server's operating system, not a user in MongoDB.
You can (almost) always connect to MongoDB without a username/password; however, when you do provide a username, you also need a password. Try
self.uri = f"mongodb://{self.host}"
It is not fully clear what you are trying to achieve. You can configure MongoDB to authenticate clients with an x.509 certificate instead of a username/password; see Use x.509 Certificates to Authenticate Clients. These connections are also encrypted via TLS/SSL.
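On the pymongo side, an x.509 connection would look roughly like this (a minimal sketch; the certificate file name is a placeholder):

# Sketch: authenticate with an x.509 client certificate instead of a password.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://111.222.333.444",
    tls=True,
    tlsCertificateKeyFile="client.pem",   # placeholder path to the client cert/key
    authMechanism="MONGODB-X509",
)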
Or are you looking to configure an SSH tunnel? See https://serverfault.com/questions/597765/how-to-connect-to-mongodb-server-via-ssh-tunnel
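If a tunnel is what you are after, one possible sketch uses the third-party sshtunnel package together with pymongo (the package, key path, and ports here are assumptions, not taken from your setup):

# Sketch: forward a local port to the remote mongod over SSH,
# then point pymongo at the local end of the tunnel.
from sshtunnel import SSHTunnelForwarder
from pymongo import MongoClient

with SSHTunnelForwarder(
    ("111.222.333.444", 22),
    ssh_username="myuser",
    ssh_pkey="~/.ssh/id_rsa",                  # assumed key location
    remote_bind_address=("127.0.0.1", 27017),
) as tunnel:
    client = MongoClient("127.0.0.1", tunnel.local_bind_port)
    print(client.server_info()["version"])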
Here is the solution that I found in the end, as simple as possible; it can be run from within Python, without any special module to install, from a Windows PowerShell:
import json
import subprocess
cmd_mongo = json.dumps('db.units.find({"UnitId": "971201065"})')
cmd_host = json.dumps(f"mongo mydb --eval {cmd_mongo}")
cmd_local = f"ssh {USER}#{HOST} \"{cmd_host}\""
output = subprocess.check_output(cmd_local, shell=True)
print(output)
I am trying to deploy a Cloud Function (gen2) in GCP but keep running into the same issue: I get this error on each deploy, when Cloud Functions sets up Cloud Run:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
MAIN.PY
from google.cloud import pubsub_v1
from google.cloud import firestore
import requests
import json
from firebase_admin import firestore
import google.auth
credentials, project = google.auth.default()
# API INFO
Base_url = 'https://xxxxxxxx.net/v1/feeds/sportsbookv2'
Sport_id = 'xxxxxxxx'
AppID = 'xxxxxxxx'
AppKey = 'xxxxxxxx'
Country = 'en_AU'
Site = 'www.xxxxxxxx.com'
project_id = "xxxxxxxx"
subscription_id = "xxxxxxxx-basketball-nba-events"
timeout = 5.0
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)
db = firestore.Client(project='xxxxxxxx')
def winodds(message: pubsub_v1.subscriber.message.Message) -> None:
    events = json.loads(message.data)
    event_ids = events['event_ids']
    url = f"{Base_url}/betoffer/event/{','.join(map(str, event_ids))}.json?app_id={AppID}&app_key={AppKey}&local={Country}&site={Site}"
    print(url)
    windata = requests.get(url).text
    windata = json.loads(windata)
    for odds_data in windata['betOffers']:
        if odds_data['betOfferType']['name'] == 'Head to Head' and 'MAIN' in odds_data['tags']:
            event_id = odds_data['eventId']
            home_team = odds_data['outcomes'][0]['participant']
            home_team_win_odds = odds_data['outcomes'][0]['odds']
            away_team = odds_data['outcomes'][1]['participant']
            away_team_win_odds = odds_data['outcomes'][1]['odds']
            print(f'{event_id} {home_team} {home_team_win_odds} {away_team} {away_team_win_odds}')
            # WRITE TO FIRESTORE
            doc_ref = db.collection(u'xxxxxxxx').document(u'basketball_nba').collection(u'win_odds').document(f'{event_id}')
            doc_ref.set({
                u'event_id': event_id,
                u'home_team': home_team,
                u'home_team_win_odds': home_team_win_odds,
                u'away_team': away_team,
                u'away_team_win_odds': away_team_win_odds,
                u'timestamp': firestore.SERVER_TIMESTAMP,
            })

streaming_pull_future = subscriber.subscribe(subscription_path, callback=winodds)
print(f"Listening for messages on {subscription_path}..\n")

# Wrap subscriber in a 'with' block to automatically call close() when done.
with subscriber:
    try:
        # When `timeout` is not set, result() will block indefinitely,
        # unless an exception is encountered first.
        streaming_pull_future.result()
    except TimeoutError:
        streaming_pull_future.cancel()  # Trigger the shutdown.
        streaming_pull_future.result()  # Block until the shutdown is complete.

if __name__ == "__main__":
    winodds()
DOCKERFILE
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.10
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
ENV GOOGLE_APPLICATION_CREDENTIALS /app/xxxxx-key.json
ENV PORT 8080
# Install production dependencies.
RUN pip install functions-framework
RUN pip install -r requirements.txt
# Run the web service on container startup.
CMD exec functions-framework --target=winodds --debug --port=$PORT
I am using PyCharm, and it all seems to work when I run it locally via Docker, Main.py, or Cloud Run locally. But as soon as I deploy, I get the error straight away.
Can someone please point me in the right direction? Where do I need to edit the port so my Cloud Function will deploy successfully?
The above error is usually caused by a configuration issue with the listener port, i.e. a mismatch between the port the container listens on and the one Cloud Run expects.
You may check and verify the following pointers to understand the probable cause of the error and rectify it:
- Check that you configured your service to listen on all network interfaces, commonly denoted as 0.0.0.0 (see Troubleshooting issues).
- Configure the PORT following Google best practices.
- Configure the PORT in your application as per the "Deploy a Python service to Cloud Run" guide.
You may start with the following simple example (Node.js shown) to check that the port wiring works properly.
const port = parseInt(process.env.PORT) || 8080;
app.listen(port, () => {
  console.log(`helloworld: listening on port ${port}`);
});
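For a Python service like yours, the equivalent check might look like the sketch below. This is only an illustration: functions-framework (which the Dockerfile already installs) binds to --port/$PORT itself, so the target just has to be an HTTP function it can serve; the function name winodds is taken from the CMD line, everything else is assumed.

# Minimal HTTP target for functions-framework; the framework binds to --port/$PORT.
import functions_framework

@functions_framework.http
def winodds(request):
    # the real work (Pub/Sub handling, Firestore writes) would go here
    return "ok", 200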
I need your advice on how to go about the following:
1. I am running a Counter-Strike game server on an Azure VM.
2. The server is nothing but a desktop utility that runs a service on a certain port, to which my friends connect and play the game.
3. At times, I have to change the "map" in the game, which means I have to log in to the Azure VM and manually perform some actions (very basic, like increasing the round time for a match) or kick a user.
4. Every time, I have to log in to the server and do this manually. If I want someone else to do it, I would have to give them VM access, which is not what I want.
5. I know moderate Python, so I can write a script that would actually perform this action (using the Python os library).
6. But I want to be able to run this script remotely from a web app or any other trigger.
So the use case would look like this:
1. A user goes to a URL (myGameserver.com:8989).
2. He sees a list of available users in the game, selects one, and presses a KICK button.
3. The KICK button kicks that user from the server by running the program I mentioned in point 5 above.
What I need your help with is: what type of app/tech should I use to build this web application, and how do I pass this "user selection" as an input to my program so that it knows which user to kick?
Here is a simple Flask API to kick players by SteamID using RCON.
You can use curl to POST data to it like this:
curl -H "Content-Type: application/json" \
     -d '{"steamid":"STEAM_1:0:1234567"}' \
     http://myGameserver.com:8989/kickplayer
The api:
from flask import Flask, request
import valve.rcon
app = Flask(__name__)
HOST = "0.0.0.0"
PORT = 8989
DEBUG = True
# replace with your VM's local ip if you are running this web app on the same VM as the csgo server
# or with the azure public ip if not. (You will need to open up port 27015 TCP for rcon)
CSGO_SERVER_IP = "10.0.100.100"
RCON_PORT = 27015
RCON_PASS = "supersekret123456"
@app.route('/kickplayer', methods=['POST'])
def kick_player():
    data = request.get_json()
    steamid = data["steamid"]  # STEAM_1:0:XXXXXXXXX
    rcon_command = "sm_kick " + steamid
    with valve.rcon.RCON((CSGO_SERVER_IP, RCON_PORT), RCON_PASS) as rcon:
        response = rcon.execute(rcon_command)
    return f"Kicked player {steamid}"

if __name__ == '__main__':
    app.run(debug=DEBUG, host=HOST, port=PORT)
Feel free to message me on discord for more advice, I run csgo servers and have written csgo web panels before: BurningTimes#1938
I need to SSH to a remote Ubuntu server to do a routine job, in the following steps:
ssh in as userA
sudo su - userB
run the daliy_python.py script, which uses psycopg2 to read some info from the database (via a local, non-TCP/IP connection)
scp readings to my local machine
The question is: How to do that automatically?
I've tried to use Fabric, but I ran into a problem with psycopg2: after I run the Fabric script below, I receive this error from my daliy_python.py:
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/xxx/.s.xxxx"?
My fabfile.py code is as below:
from fabric.api import *
import os
import socket
import pwd
# Target machine setting
srv = 'server.hostname.com'
env.hosts = [srv]
env.user = 'userA'
env.key_filename = '/location/to/my/key'
env.timeout = 2
# Force fabric abort at timeout
env.skip_bad_hosts = False
def run_remote():
    user = 'userB'
    with settings(warn_only=True):
        run('whoami')
        with cd('/home/%s/script/script_folder' % user):
            sudo('whoami')
            sudo('pwd', user=user)
            sudo('ls', user=user)
            sudo('python daliy_python.py', user=user)
Any suggestions? My database can only be accessed locally by userB, but only userA can SSH to the server. That might be a limitation. Both the local and remote machines are running Ubuntu 14.04.
This is what I do to read my root-accessible logfiles without an extra login:
ssh usera@12.34.56.78 "echo hunter2 | sudo -S tail -f /var/log/nginx/access.log"
That is: ssh usera@12.34.56.78 "..run this code on the remote.."
Then, on the remote, you pipe the sudo password into sudo -S: echo hunter2 | sudo -S ...
Add -u userb to sudo to switch to a particular user (I am using root in my case). Then, as the sudo'ed user, run your script; in my case that is tail -f /var/log/nginx/access.log.
But, reading your post, I would probably simply set up a cronjob on the remote so it runs automatically. I actually do that for all my databases: a cronjob dumps them once a day to a certain directory, with the date as the filename. Then I download them to my local PC with rsync an hour later.
I finally found out where my problem was.
Thanks @chishake and @C14L, I looked at the problem in another way.
Inspired by these posts (link1, link2), I started to think the problem was related to environment variables.
Thus I added a with statement to alter $HOME, and it worked.
fabfile.py is as below:
from fabric.api import *
import os
import socket
import pwd
# Target machine setting
srv = 'server.hostname.com'
env.hosts = [srv]
env.user = 'userA'
env.key_filename = '/location/to/my/key'
env.timeout = 2
# Force fabric abort at timeout
env.skip_bad_hosts = False
def run_remote():
    user = 'userB'
    with settings(warn_only=True):
        run('whoami')
        with shell_env(HOME='/home/%s' % user):
            sudo('echo $HOME', user=user)
            with cd('/home/%s/script/script_folder' % user):
                sudo('whoami')
                sudo('pwd', user=user)
                sudo('ls', user=user)
                sudo('python daliy_python.py', user=user)
I am new to Python and have run into a problem with the following.
This is a code snippet from the Splunk API that's used to connect to a Splunk server and then print the installed apps.
import splunklib.client as client
HOST = "server.splunk"
PORT = 8089
USERNAME = "UserABC"
PASSWORD = "Passw000rd"
# Create a Service instance and log in
service = client.connect(
    host=HOST,
    port=PORT,
    username=USERNAME,
    password=PASSWORD)

# Print installed apps to the console to verify login
for app in service.apps:
    print(app.name)
I've tried it on my system in cmd and it works fine. However, I intend to use this in a Robot Framework test, so it needs to be defined as a function in order to have a keyword I can use. I'm guessing something like the following:
import splunklib.client as client
def setServer(HOST, PORT, USERNAME, PASSWORD):
    # note: these assignments overwrite the values passed in as arguments
    HOST = "server.splunk"
    PORT = 8089
    USERNAME = "UserABC"
    PASSWORD = "Passw000rd"
    service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)
    for app in service.apps:
        print(app.name)
    print("\n")
My problem is that when I run this, nothing is printed to CMD at all. Any ideas?
Thanks
A print in a Python library is not displayed on the Robot Framework console; that is the expected behaviour. If you want to check that the piece of code was run and the print happened, check the log.html produced by Robot: it should contain your print. If you really want to display something on the Robot console, you have to use the Log To Console keyword from your Robot test. But as your print is in the Python library, you will have to import the BuiltIn library within your Python code. With all that, you should be fine.
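For illustration, here is a minimal sketch of such a library keyword, assuming robotframework is installed and the keyword is executed from a Robot test (function and variable names are illustrative, not your exact code):

import splunklib.client as client
from robot.libraries.BuiltIn import BuiltIn

def set_server(host, port, username, password):
    """Connect to Splunk and echo the installed apps to the Robot console."""
    service = client.connect(host=host, port=int(port), username=username, password=password)
    for app in service.apps:
        # log_to_console corresponds to the Log To Console keyword
        BuiltIn().log_to_console(app.name)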