I am using the txloadbalancer Twisted API in my application and it works great. I have one problem though: I can't figure out a way to add hosts to a running instance.
I use this function for now:
# pm is a ProxyManager
def addServiceToPM(pm, service):
    if isinstance(service, model.HostMapper):
        [service] = model.convertMapperToModel([service])
    for groupName, group in pm.getGroups(service.name):
        proxiedHost = service.getGroup(groupName).getHosts()[0][1]
        pm.getGroup(service.name, groupName).addHost(proxiedHost)
        tracker = HostTracking(group)
        scheduler = schedulers.schedulerFactory(group.lbType, tracker)
        pm.addTracker(service.name, groupName, tracker)
and run it with a new host:
addServiceToPM(pm, HostMapper(proxy='127.0.0.1:8080', lbType=roundr,
                              host='host2', address='127.0.0.1:10002'))
This adds the host correctly to the tracker, but not to the proxy service, so it is not used in the load balancing. Does anyone have a clue about how to do this?
So, I ended up staring at the source code until the answer appeared.
If you, as in my case, want to add a new host to an existing proxy and group, you can use:
def addHostToLB(pm, proxy, group, newHost, newHostName):
    tracker = pm.getTracker(proxy, group)
    tracker.newHost(newHost, newHostName)
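For example, with the values from the HostMapper in the question, the call might look like the line below; the proxy and group identifiers are placeholders, since they depend on the names used when the ProxyManager was set up:
# 'web' and 'hosts' are placeholder service/group names - substitute whatever
# names your ProxyManager was actually configured with.
addHostToLB(pm, 'web', 'hosts', '127.0.0.1:10002', 'host2')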
I've created an EC2 instance with AWS CDK in Python. I've added a security group and allowed ingress rules for IPv4 and IPv6 on port 22. The key pair that I specified, with the help of this Stack Overflow question, has been used in other EC2 instances set up through the console with no issue.
Everything appears to be running, but my connection keeps timing out. I went through the checklist of what usually causes this provided by Amazon, but none of those common things seems to be the problem (at least to me).
Why can't I connect with my SSH key pair to the instance I made with AWS CDK? I suspect the KeyName I am overriding is not the correct name in Python, but I can't find it in the CDK docs.
Code included below.
vpc = ec2.Vpc.from_lookup(self, "VPC", vpc_name=os.getenv("VPC_NAME"))
sec_group = ec2.SecurityGroup(self, "SG", vpc=vpc, allow_all_outbound=True)
sec_group.add_ingress_rule(ec2.Peer.any_ipv4(), connection=ec2.Port.tcp(22))
sec_group.add_ingress_rule(ec2.Peer.any_ipv6(), connection=ec2.Port.tcp(22))
instance = ec2.Instance(
    self,
    "name",
    vpc=vpc,
    instance_type=ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
    machine_image=ec2.AmazonLinuxImage(
        generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
    ),
    security_group=sec_group,
)
instance.instance.add_property_override("KeyName", os.getenv("KEYPAIR_NAME"))
elastic_ip = ec2.CfnEIP(self, "EIP", domain="vpc", instance_id=instance.instance_id)
This is an issue with internet reachability, not your SSH key.
By default, your instance is placed into a private subnet (docs), so it will not have inbound connectivity from the internet.
Place it into a public subnet and it should work.
Also, you don't have to use any overrides to set the key - use the built-in key_name argument. And you don't have to create the security group - use the connections abstraction. Here's the complete code:
vpc = ec2.Vpc.from_lookup(self, "VPC", vpc_name=os.getenv("VPC_NAME"))
instance = ec2.Instance(
    self,
    "name",
    vpc=vpc,
    instance_type=ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
    machine_image=ec2.AmazonLinuxImage(
        generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
    ),
    key_name=os.getenv("KEYPAIR_NAME"),
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC),
)
instance.connections.allow_from_any_ipv4(ec2.Port.tcp(22))
elastic_ip = ec2.CfnEIP(self, "EIP", domain="vpc", instance_id=instance.instance_id)
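Optionally, you can also export the Elastic IP as a stack output so it is printed after cdk deploy. A small sketch, assuming CDK v2-style imports (in CDK v1 the class lives in aws_cdk.core) and an illustrative output name:
from aws_cdk import CfnOutput

# For an AWS::EC2::EIP resource, Ref resolves to the allocated public IP address.
CfnOutput(self, "InstancePublicIp", value=elastic_ip.ref)
You can then connect with ssh -i <your-keypair>.pem ec2-user@<the printed IP>.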
I am currently underway with my senior capstone project, in which I am to write a somewhat basic program that allows a custom interface on my iPhone 6 to remotely control or issue critical commands to a NIDS (Suricata) running on my home Raspberry Pi (3B+) VPN server. My question is whether it's feasible to write such a program that allows remote control of basic functions/response options on the Pi's IDS, given that I am using the phone as a device within the VPN network. The main issue would be establishing remote signaling to the iOS device whenever there is an anomaly and allowing it to respond back and execute root-level commands on the NIDS.
If it is of any use, I am currently using Pythonista as a runtime environment on my mobile device and have set my VPN's connection method to UDP, but I'm not sure if enabling SSH would assist me. I have a rather basic understanding of programming for network connectivity. I very much appreciate any and all help given!
from tkinter import *

window = Tk()
window.geometry("450x450")
window.title("IDS Response Manager")

label1 = Label(window, text="Intrusion Response Options", fg='black', bg='white', relief="solid", font=("times new roman", 12, "bold"))
label1.pack()

button1 = Button(window, text="Terminate Session", fg='white', bg='brown', relief=RIDGE, font=("arial", 12, "bold"))
button1.place(x=50, y=110)  # GROOVE, RIDGE, SUNKEN, RAISED
button2 = Button(window, text="Packet Dump", fg='white', bg='brown', relief=RIDGE, font=("arial", 12, "bold"))
button2.place(x=220, y=110)
button3 = Button(window, text="Block Port", fg='white', bg='brown', relief=RIDGE, font=("arial", 12, "bold"))
button3.place(x=110, y=170)

window.mainloop()  # keeps the window open when run as a script
Very basic options, as shown here.
You can use a Flask server with an API that you can send POST requests to; the client can then send GET requests to receive the commands. To host your API, look at Heroku (free tier available, and very much functional, with an already configured app_name.herokuapp.com domain).
Look up how to send a POST request with the technologies you are using to build your app. Send the keyword command with the command to /send_commands, along with the password "password_here" (changeable to anything you want).
Python:
Modules: Flask (server), requests (client)
Server Code:
from flask import Flask, request

app = Flask(__name__)

commands = []

@app.route('/get_commands', methods=['GET'])
def get_commands():
    global commands
    tmp_commands = commands[::]
    commands = []  # clear the queue once the commands have been fetched
    return {'commands': tmp_commands}

@app.route('/send_commands', methods=['POST'])
def send_commands():
    if request.json['password'] == "password_here":
        commands.append(request.json['command'])
        return {'worked': True}
    else:
        return {'worked': False}

if __name__ == '__main__':
    app.run(debug=True)
Client Code:
import os

import requests

URL = "url_here/get_commands"

response = requests.get(url=URL)
for command in response.json()['commands']:
    os.system(command)
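The sending side isn't shown above; a minimal sketch of posting a command from the controlling device (the URL and command values are placeholders) could look like this:
import requests

URL = "url_here/send_commands"  # placeholder - point this at the hosted API

payload = {
    "password": "password_here",   # must match the server-side check
    "command": "echo terminate",   # whatever the pressed button should trigger
}

response = requests.post(url=URL, json=payload)
print(response.json())  # {'worked': True} if the password matched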
I am facing a problem making my Apache Beam pipeline work on Cloud Dataflow with DataflowRunner.
The first step of the pipeline is to connect to an external PostgreSQL server hosted on a VM which is only externally accessible through SSH on port 22, and extract some data. I can't change these firewall rules, so I can only connect to the DB server via SSH tunneling, aka port forwarding.
In my code I make use of the Python library sshtunnel. It works perfectly when the pipeline is launched from my development computer with DirectRunner:
from sshtunnel import open_tunnel
with open_tunnel(
    (user_options.ssh_tunnel_host, user_options.ssh_tunnel_port),
    ssh_username=user_options.ssh_tunnel_user,
    ssh_password=user_options.ssh_tunnel_password,
    remote_bind_address=(user_options.dbhost, user_options.dbport)
) as tunnel:
    with beam.Pipeline(options=pipeline_options) as p:
        (p | "Read data" >> ReadFromSQL(
                host=tunnel.local_bind_host,
                port=tunnel.local_bind_port,
                username=user_options.dbusername,
                password=user_options.dbpassword,
                database=user_options.dbname,
                wrapper=PostgresWrapper,
                query=select_query
            )
            | "Format CSV" >> DictToCSV(headers)
            | "Write CSV" >> WriteToText(user_options.export_location)
        )
The same code, launched with DataflowRunner inside a non-default VPC where all ingress is denied but there are no egress restrictions, and with Cloud NAT configured, fails with this message:
psycopg2.OperationalError: could not connect to server: Connection refused Is the server running on host "0.0.0.0" and accepting TCP/IP connections on port 41697? [while running 'Read data/Read']
So, obviously something is wrong with my tunnel, but I cannot spot what exactly. I was beginning to wonder whether a direct SSH tunnel setup was even possible through Cloud NAT, until I found this blog post: https://cloud.google.com/blog/products/gcp/guide-to-common-cloud-dataflow-use-case-patterns-part-1 stating:
A core strength of Cloud Dataflow is that you can call external services for data enrichment. For example, you can call a micro service to get additional data for an element.
Within a DoFn, call-out to the service (usually done via HTTP). You have full control to make any type of connection that you choose, so long as the firewall rules you set up within your project/network allow it.
So it should be possible to set up this tunnel! I don't want to give up, but I don't know what to try next. Any ideas?
Thanks for reading.
Problem solved ! I can't believe I've spent two full days on this... I was looking completely in the wrong direction.
The issue was not with some Dataflow or GCP networking configuration, and as far as I can tell...
You have full control to make any type of connection that you choose, so long as the firewall rules you set up within your project/network allow it
is true.
The problem was, of course, in my code: it was only revealed in a distributed environment. I had made the mistake of opening the tunnel from the main pipeline processor instead of from the workers. So the SSH tunnel was up, but not between the workers and the target server, only between the main pipeline and the target!
To fix this, I had to change my requesting DoFn to wrap the query execution with the tunnel:
class TunnelledSQLSourceDoFn(sql.SQLSourceDoFn):
    """Wraps SQLSourceDoFn in a ssh tunnel"""

    def __init__(self, *args, **kwargs):
        self.dbport = kwargs["port"]
        self.dbhost = kwargs["host"]
        self.args = args
        self.kwargs = kwargs
        super().__init__(*args, **kwargs)

    def process(self, query, *args, **kwargs):
        # Remote side of the SSH Tunnel
        remote_address = (self.dbhost, self.dbport)
        ssh_tunnel = (self.kwargs['ssh_host'], self.kwargs['ssh_port'])
        with open_tunnel(
            ssh_tunnel,
            ssh_username=self.kwargs["ssh_user"],
            ssh_password=self.kwargs["ssh_password"],
            remote_bind_address=remote_address,
            set_keepalive=10.0
        ) as tunnel:
            forwarded_port = tunnel.local_bind_port
            self.kwargs["port"] = forwarded_port
            source = sql.SQLSource(*self.args, **self.kwargs)
            sql.SQLSouceInput._build_value(source, source.runtime_params)
            logging.info("Processing - {}".format(query))
            for records, schema in source.client.read(query):
                for row in records:
                    yield source.client.row_as_dict(row, schema)
As you can see, I had to override some bits of the pysql_beam library.
Finally, each worker opens its own tunnel for each request. It's probably possible to optimize this behavior, but it's enough for my needs.
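For reference, wiring this DoFn into the pipeline from the question could hypothetically look like the sketch below; the Create step and the extra ssh_* keyword arguments are assumptions based on the __init__ above, not the documented pysql_beam API:
with beam.Pipeline(options=pipeline_options) as p:
    (p
        | "Queries" >> beam.Create([select_query])
        | "Read data" >> beam.ParDo(TunnelledSQLSourceDoFn(
            host=user_options.dbhost,
            port=user_options.dbport,
            username=user_options.dbusername,
            password=user_options.dbpassword,
            database=user_options.dbname,
            wrapper=PostgresWrapper,
            ssh_host=user_options.ssh_tunnel_host,
            ssh_port=user_options.ssh_tunnel_port,
            ssh_user=user_options.ssh_tunnel_user,
            ssh_password=user_options.ssh_tunnel_password))
        | "Format CSV" >> DictToCSV(headers)
        | "Write CSV" >> WriteToText(user_options.export_location)
    )
Note that the outer tunnel from the original snippet is gone: each worker now opens its own tunnel inside process().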
I have a simple CherryPy web application with two classes. The init code looks like this:
c = MyClass()
c.updates = AnotherClass()
app = cherrypy.tree.mount(c, '/', 'myapp.config')
c.setConfig(app.config)
c.updates.setConfig(app.config)
cherrypy.engine.start()
cherrypy.engine.block()
The setConfig method for both classes is just a line of code to store some database configuration:
def setConfig(self, conf):
    self.config = conf['Database']
The configuration file myapp.config looks like this:
[global]
server.socket_host = "0.0.0.0"
server.socket_port = 80
[/]
tools.staticdir.root = com.stuff.myapp.rootDir + '/html'
[Database]
dbtable: "mydbtable"
username: "user"
password: "pass"
When I start the lot, the application gets the database config data and correctly serves static files from the /html directory, but it only listens on localhost, port 8080. I get this on the console:
[11/Apr/2013:10:03:58] ENGINE Bus STARTING
[11/Apr/2013:10:03:58] ENGINE Started monitor thread 'Autoreloader'.
[11/Apr/2013:10:03:58] ENGINE Started monitor thread '_TimeoutMonitor'.
[11/Apr/2013:10:03:58] ENGINE Serving on 127.0.0.1:8080
[11/Apr/2013:10:03:58] ENGINE Bus STARTED
I definitely must have done something wrong. It's as if the global part of the configuration doesn't get applied. How can I fix it?
I think I figured out how to solve it. I added this line:
cherrypy.config.update('myapp.config')
after the line that says
app = cherrypy.tree.mount(c, '/', 'myapp.config')
I think the reason my classes were getting the Database configuration is that I pass it manually with the setConfig() calls; this passes the application configuration only, not the global configuration. The mount() call apparently doesn't propagate the configuration data to the objects it mounts, as I thought it would.
Furthermore, the update() call must come after the mount() call, or an exception is raised.
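Putting it together, the startup code from the question then looks like this (same objects as before, with the update() call added right after mount()):
c = MyClass()
c.updates = AnotherClass()

app = cherrypy.tree.mount(c, '/', 'myapp.config')
cherrypy.config.update('myapp.config')  # applies the [global] section (host/port)

c.setConfig(app.config)
c.updates.setConfig(app.config)

cherrypy.engine.start()
cherrypy.engine.block()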
I'm not sure whether this is the best way to organize this code. This works for now, but better ideas are always welcome.
I want to make a task use a different set of hosts (role) depending on which network I'm currently in. If I'm on the same network as my servers, I don't need to go through the gateway.
Here's a snippet from my fabfile.py:
env.use_ssh_config = True
env.roledefs = {
    'rack_machines': ['rack4', 'rack5', 'rack6', 'rack7'],
    'external_rack_machines': ['erack4', 'erack5', 'erack6', 'erack7']
}
@roles('rack_machines')
def host_type():
    run('uname -s')
So, for my task host_type(), I'd like its role to be rack_machines if I'm in the same network as rack4, rack5, etc. Otherwise, I'd like its role to be external_rack_machines, therefore going through the gateway to access those same machines.
Maybe there's a way to do this with ssh config alone. Here's a snippet of my ssh_config file as well:
Host erack4
    HostName company-gw.foo.bar.com
    Port 2261
    User my_user

Host rack4
    HostName 10.43.21.61
    Port 22
    User my_user
Role definitions are taken into account after the module has been imported, so you can place some code in your fabfile that executes on import, detects the network, and sets the appropriate roledefs, as sketched below.
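A minimal sketch of that first approach, assuming the rack network is the 10.43.21.0/24 subnet shown in the ssh_config snippet (adjust the probe address and prefix to your setup):
import socket

def on_rack_network():
    # Open a UDP socket towards a rack address just to learn which local IP
    # would be used for the route; no packets are actually sent.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("10.43.21.61", 22))
        return s.getsockname()[0].startswith("10.43.21.")
    finally:
        s.close()

# Runs at import time, so the roledefs are in place before any task executes.
if on_rack_network():
    env.roledefs = {'rack_machines': ['rack4', 'rack5', 'rack6', 'rack7']}
else:
    env.roledefs = {'rack_machines': ['erack4', 'erack5', 'erack6', 'erack7']}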
A second way to achieve the goal is to use a "flag task": a task that does nothing but set the appropriate roledefs. I.e.:
hosts = {
    "rack": ["rack1", "rack2"],
    "external_rack": ["external_rack1", "external_rack2"]
}

env.roledefs = {"rack_machines": hosts["rack"]}

@task
def set_hosts(hostset="rack"):
    if hostset in hosts:
        env.roledefs["rack_machines"] = hosts[hostset]
    else:
        print("Invalid hostset")

@roles("rack_machines")
def business():
    pass
And invoke it this way: fab set_hosts:external_rack business