pywinauto - TimeComX Basic print_control_identifiers() doesn't show all the options - python

I want to automate this program: TimeComX Basic.
The script I wrote:
from pywinauto.application import Application as PyWinAutoApplication
from pywinauto.timings import wait_until
from pywinauto.keyboard import send_keys
import pywinauto
import os
import sys
from pywinauto import mouse
import traceback
# Hibernate PC
app2 = PyWinAutoApplication(backend="uia").connect(found_index=0,title="TimeComX Basic")
handle = pywinauto.findwindows.find_windows(title="TimeComX Basic")[0]
window = app2.window(handle=handle)
window.maximize()
window.set_focus()
app2.TimeComxBasic.print_control_identifiers()
#mouse.click(button='left', coords=(150, 960))
Note that to run this script you have to manually install and open TimeComX Basic.
The output:
Control Identifiers:
Dialog - 'TimeComX Basic' (L-11, T-11, R1931, B1019)
['TimeComX BasicDialog', 'Dialog', 'TimeComX Basic']
child_window(title="TimeComX Basic", control_type="Window")
|
| TitleBar - '' (L24, T-8, R1920, B34)
| ['TitleBar']
| |
| | Menu - 'System' (L0, T0, R22, B22)
| | ['Menu', 'System', 'SystemMenu', 'System0', 'System1']
| | child_window(title="System", auto_id="MenuBar", control_type="MenuBar")
| | |
| | | MenuItem - 'System' (L0, T0, R22, B22)
| | | ['MenuItem', 'System2', 'SystemMenuItem']
| | | child_window(title="System", control_type="MenuItem")
| |
| | Button - 'Minimize' (L1707, T0, R1778, B33)
| | ['MinimizeButton', 'Button', 'Minimize', 'Button0', 'Button1']
| | child_window(title="Minimize", control_type="Button")
| |
| | Button - 'Restore' (L1778, T0, R1848, B33)
| | ['Restore', 'Button2', 'RestoreButton']
| | child_window(title="Restore", control_type="Button")
| |
| | Button - 'Close' (L1848, T0, R1920, B33)
| | ['Close', 'Button3', 'CloseButton']
| | child_window(title="Close", control_type="Button")
As you can see, it only lists the title-bar buttons (Minimize, Restore, Close) and the system menu. There is no identifier for the "Start" button, for example.
What can I do in this situation?
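Two things worth trying, as a sketch rather than a verified fix for TimeComX specifically: inspect the window with the other backend ("win32"), since some applications expose more controls through one accessibility API than the other, and if the "Start" button turns out to be owner-drawn and invisible to both backends, fall back to clicking by coordinates. The coordinates below are placeholders.
from pywinauto.application import Application

# Try the win32 backend as well; some apps expose different controls there.
app_win32 = Application(backend="win32").connect(title="TimeComX Basic", found_index=0)
app_win32.window(title="TimeComX Basic").print_control_identifiers()

# If the button is owner-drawn and never appears in either tree, click by
# coordinates relative to the window (placeholder values).
app_uia = Application(backend="uia").connect(title="TimeComX Basic", found_index=0)
app_uia.window(title="TimeComX Basic").click_input(coords=(150, 960))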

How to solve dist.init_process_group from hanging (or deadlocks)?

I was trying to set up DDP (distributed data parallel) on a DGX A100, but it doesn't work. Whenever I try to run it, it simply hangs. My code is super simple, just spawning 4 processes for 4 GPUs (for the sake of debugging I simply destroy the group immediately, but it doesn't even reach there):
# imports needed by this snippet
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def find_free_port():
    """ https://stackoverflow.com/questions/1365265/on-localhost-how-do-i-pick-a-free-port-number """
    import socket
    from contextlib import closing

    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.bind(('', 0))
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        return str(s.getsockname()[1])

def setup_process(rank, world_size, backend='gloo'):
    """
    Initialize the distributed environment (for each process).

    gloo: is a collective communications library (https://github.com/facebookincubator/gloo). My understanding is that
    it's a library/API for processes to communicate/coordinate with each other/the master. It's a backend library.

    export NCCL_SOCKET_IFNAME=eth0
    export NCCL_IB_DISABLE=1

    https://stackoverflow.com/questions/61075390/about-pytorch-nccl-error-unhandled-system-error-nccl-version-2-4-8
    https://pytorch.org/docs/stable/distributed.html#common-environment-variables
    """
    if rank != -1:  # -1 rank indicates serial code
        print(f'setting up rank={rank} (with world_size={world_size})')
        # MASTER_ADDR = 'localhost'
        MASTER_ADDR = '127.0.0.1'
        MASTER_PORT = find_free_port()
        # set up the master's ip address so this child process can coordinate
        os.environ['MASTER_ADDR'] = MASTER_ADDR
        print(f"{MASTER_ADDR=}")
        os.environ['MASTER_PORT'] = MASTER_PORT
        print(f"{MASTER_PORT}")

        # use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends
        if torch.cuda.is_available():
            # unsure if this is really needed
            # os.environ['NCCL_SOCKET_IFNAME'] = 'eth0'
            # os.environ['NCCL_IB_DISABLE'] = '1'
            backend = 'nccl'
        print(f'{backend=}')

        # Initializes the default distributed process group, and this will also initialize the distributed package.
        dist.init_process_group(backend, rank=rank, world_size=world_size)
        # dist.init_process_group(backend, rank=rank, world_size=world_size)
        # dist.init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank)
        print(f'--> done setting up rank={rank}')
        dist.destroy_process_group()

mp.spawn(setup_process, args=(4,), world_size=4)
why is this hanging?
nvidia-smi output:
$ nvidia-smi
Fri Mar 5 12:47:17 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.102.04 Driver Version: 450.102.04 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 26C P0 51W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:0F:00.0 Off | 0 |
| N/A 25C P0 52W / 400W | 3MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:47:00.0 Off | 0 |
| N/A 25C P0 51W / 400W | 3MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:4E:00.0 Off | 0 |
| N/A 25C P0 51W / 400W | 3MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 4 A100-SXM4-40GB On | 00000000:87:00.0 Off | 0 |
| N/A 30C P0 52W / 400W | 3MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 5 A100-SXM4-40GB On | 00000000:90:00.0 Off | 0 |
| N/A 29C P0 53W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 6 A100-SXM4-40GB On | 00000000:B7:00.0 Off | 0 |
| N/A 29C P0 52W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 7 A100-SXM4-40GB On | 00000000:BD:00.0 Off | 0 |
| N/A 48C P0 231W / 400W | 7500MiB / 40537MiB | 99% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 7 N/A N/A 147243 C python 7497MiB |
+-----------------------------------------------------------------------------+
How do I set up DDP on this new machine?
Update
By the way, I've successfully installed APEX because some other links suggest doing that, but it still fails. This is what I did:
Went to https://github.com/NVIDIA/apex and followed their instructions:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
but before the above I had to update gcc:
conda install -c psi4 gcc-5
It did install (I could import it successfully), but it didn't help.
Now it actually prints an error message:
Traceback (most recent call last):
  File "/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
KeyboardInterrupt
Process SpawnProcess-3:
Traceback (most recent call last):
  File "/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/home/miranda9/ML4Coq/ml4coq-proj/embeddings_zoo/tree_nns/main_brando.py", line 252, in train
    setup_process(rank, world_size=opts.world_size)
  File "/home/miranda9/ML4Coq/ml4coq-proj/embeddings_zoo/distributed.py", line 85, in setup_process
    dist.init_process_group(backend, rank=rank, world_size=world_size)
  File "/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 436, in init_process_group
    store, rank, world_size = next(rendezvous_iterator)
  File "/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 179, in _env_rendezvous_handler
    store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: connect() timed out.
During handling of the above exception, another exception occurred:
related:
https://github.com/pytorch/pytorch/issues/9696
https://discuss.pytorch.org/t/dist-init-process-group-hangs-silently/55347/2
https://forums.developer.nvidia.com/t/imagenet-hang-on-dgx-1-when-using-multiple-gpus/61919
apex suggestion: https://discourse.mozilla.org/t/hangs-on-dist-init-process-group-in-distribute-py/44686
https://github.com/pytorch/pytorch/issues/15638
https://github.com/pytorch/pytorch/issues/53395
The following fixes are based on Writing Distributed Applications with PyTorch, Initialization Methods.
Issue 1:
It will hang unless you pass in nprocs=world_size to mp.spawn(). In other words, it's waiting for the "whole world" to show up, process-wise.
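For example, with the question's setup_process(rank, world_size, backend) signature (mp.spawn supplies the rank itself), the spawn call would look like this; the full, corrected demo follows below:
mp.spawn(setup_process, args=(world_size,), nprocs=world_size)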
Issue 2:
The MASTER_ADDR and MASTER_PORT need to be the same in each process' environment and need to be a free address:port combination on the machine where the process with rank 0 will be run.
Both of these are implied or directly read from the following quote from the link above (emphasis added):
Environment Variable
We have been using the environment variable initialization method
throughout this tutorial. By setting the following four environment
variables on all machines, all processes will be able to properly
connect to the master, obtain information about the other processes,
and finally handshake with them.
MASTER_PORT: A free port on the machine that will host the process with rank 0.
MASTER_ADDR: IP address of the machine that will host the process with rank 0.
WORLD_SIZE: The total number of processes, so that the master knows how many workers to wait for.
RANK: Rank of each process, so they will know whether it is the master of a worker.
Here's some code to demonstrate both of those in action:
import torch
import torch.multiprocessing as mp
import torch.distributed as dist
import os

def find_free_port():
    """ https://stackoverflow.com/questions/1365265/on-localhost-how-do-i-pick-a-free-port-number """
    import socket
    from contextlib import closing

    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.bind(('', 0))
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        return str(s.getsockname()[1])

def setup_process(rank, master_addr, master_port, world_size, backend='gloo'):
    print(f'setting up {rank=} {world_size=} {backend=}')

    # set up the master's ip address so this child process can coordinate
    os.environ['MASTER_ADDR'] = master_addr
    os.environ['MASTER_PORT'] = master_port
    print(f"{master_addr=} {master_port=}")

    # Initializes the default distributed process group, and this will also initialize the distributed package.
    dist.init_process_group(backend, rank=rank, world_size=world_size)
    print(f"{rank=} init complete")
    dist.destroy_process_group()
    print(f"{rank=} destroy complete")

if __name__ == '__main__':
    world_size = 4
    master_addr = '127.0.0.1'
    master_port = find_free_port()
    mp.spawn(setup_process, args=(master_addr, master_port, world_size), nprocs=world_size)

Is there a different way that I could display my tables based on my arguments entered using the argparse module?

The goal is to display tables based on arguments entered in the terminal.
I've tried writing a function that outputs each individual table using if, elif and else statements, but that only displays the tables one at a time.
I've also tried a different approach, which I prefer: it adds a column to my table based on my arguments.
def generate_table(inventory):
    args = arguments()
    data = generate_data(inventory)
    main_headers = ['os_version', 'serial_number']
    lldp_headers = ['lldp']
    out_file = ['outfile']
    main_header = []
    lldp_header = []
    main_table_header = PrettyTable()
    lldp_table_header = PrettyTable()
    for arg in vars(args):
        if arg in main_headers and getattr(args, arg):
            main_header.append(arg)
        elif arg in lldp_headers and getattr(args, arg):
            lldp_header.append(arg)
        elif arg in out_file and getattr(args, arg):
            out_file.append(arg)
            output_file(inventory)
    main_header.insert(0, 'Hostname')
    main_table_header.field_names = main_header
    lldp_table_header.field_names = ['Hostname', 'Neighbor', 'Local Interface', 'Neighbor Interface']
    for hostname, details in data.items():
        row = [hostname]
        for column in main_table_header.field_names[1:]:
            row.append(details[column])
        main_table_header.add_row(row)
        for lldp_data in details['lldp']:
            neighbor = lldp_data['device-id']
            local_int = lldp_data['local-interface']
            neigh_int = lldp_data['connecting-interface']
            lldp_table_header.add_row([hostname, neighbor, local_int, neigh_int])
    print(main_table_header)
    print(lldp_table_header)

def arguments():
    parser = argparse.ArgumentParser(description='Argparse for Training Course.')
    parser.add_argument('-s', '--serial_number', action='store_true', help='Device Serial Numbers')
    parser.add_argument('-v', '--os_version', action='store_true', help='Output Devices OS')
    parser.add_argument('--lldp', action='store_true', help='Output LLDP Data')
    parser.add_argument('--outfile', action='store_true', help='Output to file')
    parser.add_argument('--inventory', help='Inventory File', default=["inventory.yml"], required=True)
    args = parser.parse_args()
    return args

def get_inventory(inventory):
    with open(inventory) as fh:
        yml_file = yaml.load(fh)
    return yml_file

def main():
    args = arguments()
    if not os.path.isfile(args.inventory):
        sys.exit('Please specify valid, readable YAML file with data')
    inventory = get_inventory(args.inventory)
    generate_table(inventory)

if __name__ == '__main__':
    main()
YAML FILE:
csr1:
  username: admin
  password: pass
  transport: restconf
csr2:
  username: admin
  password: pass
  transport: restconf
This is what I expect:
python3 rest5.py --inventory inventory.yml -v
+----------+------------+
| Hostname | os_version |
+----------+------------+
| csr1 | 16.6 |
| csr2 | 16.6 |
+----------+------------+
python3 rest5.py --inventory inventory.yml -s
+----------+---------------+
| Hostname | serial_number |
+----------+---------------+
| csr1 | 9KIBQAQ3OPE |
| csr2 | 9KIBQAQ3OPE |
+----------+---------------+
python3 rest5.py --inventory inventory.yml -s -v
+----------+---------------+------------+
| Hostname | serial_number | os_version |
+----------+---------------+------------+
| csr1 | 9KIBQAQ3OPE | 16.6 |
| csr2 | 9KIBQAQ3OPE | 16.6 |
+----------+---------------+------------+
python3 rest5.py --inventory inventory.yml --lldp
+----------+--------------+-----------------+--------------------+
| Hostname | Neighbor | Local Interface | Neighbor Interface |
+----------+--------------+-----------------+--------------------+
| csr1 | csr2.com | Gi1 | Gi1 |
| csr2 | csr1.com | Gi1 | Gi1 |
+----------+--------------+-----------------+--------------------+
python3 rest5.py --inventory inventory.yml --lldp -s -v
+----------+---------------+------------+
| Hostname | serial_number | os_version |
+----------+---------------+------------+
| csr1 | 9KIBQAQ3OPE | 16.6 |
| csr2 | 9KIBQAQ3OPE | 16.6 |
+----------+---------------+------------+
+----------+--------------+-----------------+--------------------+
| Hostname | Neighbor | Local Interface | Neighbor Interface |
+----------+--------------+-----------------+--------------------+
| csr1 | csr2.com | Gi1 | Gi1 |
| csr2 | csr1.com | Gi1 | Gi1 |
+----------+--------------+-----------------+--------------------+
The actual output:
python3 rest5.py --inventory inventory.yml -s
+----------+---------------+
| Hostname | serial_number |
+----------+---------------+
| csr1 | 9KIBQAQ3OPE |
| csr2 | 9KIBQAQ3OPE |
+----------+---------------+
+----------+--------------+-----------------+--------------------+
| Hostname | Neighbor | Local Interface | Neighbor Interface |
+----------+--------------+-----------------+--------------------+
| csr1 | csr2.com | Gi1 | Gi1 |
| csr2 | csr1.com | Gi1 | Gi1 |
+----------+--------------+-----------------+--------------------+
python3 rest5.py --inventory inventory.yml --lldp
+----------+
| Hostname |
+----------+
| csr1 |
| csr2 |
+----------+
+----------+--------------+-----------------+--------------------+
| Hostname | Neighbor | Local Interface | Neighbor Interface |
+----------+--------------+-----------------+--------------------+
| csr1 | csr2.com | Gi1 | Gi1 |
| csr2 | csr1.com | Gi1 | Gi1 |
+----------+--------------+-----------------+--------------------+
Your generate_table method is always printing out two tables, when you only want it to print out one:
Your original function:
def generate_table(inventory):
    args = arguments()
    ...
    print(main_table_header)
    print(lldp_table_header)
Should simply change to:
def generate_table(inventory):
    args = arguments()
    ...
    if args.lldp:
        print(lldp_table_header)
    else:
        print(main_table_header)
Other commenters mentioned optimizations that would generally make the code better and are worth implementing, such as:
only parsing the arguments once
creating only the tables you are actually going to render, instead of building tables that never get printed
But at the end of the day, you were just a few lines away from making the use cases above work the way you wanted.
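For the second optimization, here is a rough sketch of a generate_table that only builds what it will print. It reuses the helpers and key names from the question's own code (generate_data, output_file, PrettyTable, details['lldp'], etc.), so treat it as an outline rather than a drop-in replacement:
def generate_table(inventory, args):
    data = generate_data(inventory)

    # Only the columns whose flags were passed on the command line.
    main_columns = [col for col in ('os_version', 'serial_number') if getattr(args, col)]

    if main_columns:
        main_table = PrettyTable()
        main_table.field_names = ['Hostname'] + main_columns
        for hostname, details in data.items():
            main_table.add_row([hostname] + [details[col] for col in main_columns])
        print(main_table)

    if args.lldp:
        lldp_table = PrettyTable()
        lldp_table.field_names = ['Hostname', 'Neighbor', 'Local Interface', 'Neighbor Interface']
        for hostname, details in data.items():
            for lldp_data in details['lldp']:
                lldp_table.add_row([hostname,
                                    lldp_data['device-id'],
                                    lldp_data['local-interface'],
                                    lldp_data['connecting-interface']])
        print(lldp_table)

    if args.outfile:
        output_file(inventory)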
def main():
    args = arguments()
    if not os.path.isfile(args.inventory):
        sys.exit('Please specify valid, readable YAML file with data')
    inventory = get_inventory(args.inventory)
    generate_table(inventory, args)
You call get_inventory with a string, a value from args. You should also call generate_table with values from args, or with args itself. Re-evaluating args works, but it just makes your code messier.
def generate_table(inventory, args):
    # args = arguments()  # no need to reevaluate args
    data = generate_data(inventory)
    ...
Same could be done for output_file, though it isn't obvious where you are using args.
In generate_table you appear to use args mainly in:
for arg in vars(args):
    if arg in main_headers and getattr(args, arg):
        main_header.append(arg)
    elif arg in lldp_headers and getattr(args, arg):
        lldp_header.append(arg)
    elif arg in out_file and getattr(args, arg):
        out_file.append(arg)
        output_file(inventory)
That's an obscure piece of code, treating args both as a Namespace and as a dictionary. I think it's just checking the values of
args.os_version
args.serial_number
args.lldp
args.outfile
Those are all store_true, so they will always be present with a True/False value. So you could write:
if args.outfile:
    output_file(inventory)
if args.lldp:
    lldp_header.append('lldp')
But I'm not too interested in digging through all the logic steps.
Make sure that you understand what parse_args has produced. During debugging I encourage users to
print(args)
That way there'll be fewer surprises.
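For instance, with the parser from the question, the -s run would print a Namespace roughly like this (the exact inventory value depends on what you pass):
# python3 rest5.py --inventory inventory.yml -s
print(args)
# Namespace(inventory='inventory.yml', lldp=False, os_version=False, outfile=False, serial_number=True)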

How to click using pywinauto

I would like to use pywinauto to control an image processing software.
First, I need to click a specific area (which is used for image dragging) to pop up a window for path input. See the first figure.
Then, I need to input a path and click the button "Select Folder". See the second figure.
I tried:
from pywinauto import Desktop, Application, mouse, findwindows
from pywinauto.keyboard import SendKeys
app = Application(backend='uia').start(r"C:\Program Files\Duplicate Photo Cleaner\DuplicatePhotoCleaner.exe")
app.connect(path="DuplicatePhotoCleaner.exe")
app.DuplicatePhotoCleaner.print_control_identifiers()
Control Identifiers:
Dialog - 'Duplicate Photo Cleaner' (L440, T126, R1480, B915)
['Duplicate Photo Cleaner', 'Duplicate Photo CleanerDialog', 'Dialog']
child_window(title="Duplicate Photo Cleaner", control_type="Window")
|
| TitleBar - '' (L464, T129, R1472, B157)
| ['', 'TitleBar']
| |
| | Menu - 'System' (L448, T134, R470, B156)
| | ['System', 'Menu', 'SystemMenu', 'System0', 'System1']
| | child_window(title="System", auto_id="MenuBar", control_type="MenuBar")
| | |
| | | MenuItem - 'System' (L448, T134, R470, B156)
| | | ['System2', 'SystemMenuItem', 'MenuItem']
| | | child_window(title="System", control_type="MenuItem")
| |
| | Button - 'Minimize' (L1333, T127, R1380, B157)
| | ['Minimize', 'Button', 'MinimizeButton', 'Button0', 'Button1']
| | child_window(title="Minimize", control_type="Button")
| |
| | Button - 'Maximize' (L1380, T127, R1426, B157)
| | ['Button2', 'Maximize', 'MaximizeButton']
| | child_window(title="Maximize", control_type="Button")
| |
| | Button - 'Close' (L1426, T127, R1473, B157)
| | ['CloseButton', 'Button3', 'Close']
| | child_window(title="Close", control_type="Button")
Can anyone help?
Thank you very much.
Looks like the + button where you need to click to get the window (shown in the second figure) is owner-drawn.
So, there is only one way to bring up the "Add folder to search" window: use the click_input method and pass coordinates.
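For example (the coordinates below are placeholders; take them from the actual position of the + area, e.g. with Inspect.exe or a screenshot):
# Click the owner-drawn "+" area by absolute screen coordinates (placeholder values).
from pywinauto import mouse
mouse.click(button='left', coords=(760, 520))
# Or click relative to the main window instead of the screen:
app.DuplicatePhotoCleaner.click_input(coords=(320, 400))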
Once the window comes up, you can use the below code to set the value:
app.DuplicatePhotoCleaner.child_window(title="Folder:", auto_id="1152", control_type="Edit").set_text('Hello world')
# or
app.DuplicatePhotoCleaner['Folder:Edit'].set_text('Hello world')
Application().connect(title='Add folder to search')...
Please go through the pywinauto docs for further info.

Tornado long polling requests

Below is the simplest example of my issue:
When a request is made it will print Request via GET <__main__.MainHandler object at 0x104041e10> and then the request will remain open. Good! However, when you make another request, it does not call the MainHandler.get method until the first connection has finished.
How can I get multiple requests into the get method while they remain long-polling? I'm passing arguments with each request that will get different results from a pub/sub via Redis. The issue is that I only get one connection in at a time. What's wrong? And why is this blocking other requests?
import tornado.ioloop
import tornado.web
import os

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        print 'Request via GET', self

if __name__ == '__main__':
    application = tornado.web.Application([
        (r"/", MainHandler)])
    try:
        application.listen(int(os.environ.get('PORT', 5000)))
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
Left diagram: as described in the issue above; the requests are not handled in the fashion requested in the right diagram.
Right diagram: I need the requests (a-d) to be handled by the RequestHandler and then wait for the pub/sub to announce their data.
a b c d
+ + + + ++ a b c d
| | | | || + + + +
| | | | || | | | |
| | | | || | | | |
| | | | || | | | |
| v v v || | | | |
+---|-----------------------------+ || +-----|----|---|---|------------------+
| | | || | | | | | |
| + RequestHandler| || | + + + + RequestHan. |
| | | || | | | | | |
+---|-----------------------------+ || +-----|----|---|---|------------------+
+---|-----------------------------+ || +-----|----|---|---|------------------+
| | | || | | | | | |
| + Sub/Pub Que | || | v + v v Que |
| | | || | | |
+---|-----------------------------+ || +----------|--------------------------+
+---|-----------------------------+ || +----------|--------------------------+
| || |
| Finished || | Finished
v || v
||
||
||
||
||
||
||
++
If this is accomplishable with another programming language please let me know.
Thank you for your help!
From http://www.tornadoweb.org/en/stable/web.html#tornado.web.asynchronous:
tornado.web.asynchronous(method)
...
If this decorator is given, the response is not finished when the
method returns. It is up to the request handler to call self.finish()
to finish the HTTP request. Without this decorator, the request is
automatically finished when the get() or post() method returns.
You have to finish the get method explicitly:
import tornado.ioloop
import tornado.web
import tornado.options
from tornado.options import define, options

define("port", default=8000, help="run on the given port", type=int)

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        print 'Request via GET', self
        self.finish()

if __name__ == '__main__':
    application = tornado.web.Application([
        (r"/", MainHandler)])
    try:
        application.listen(options.port)
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
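To get actual long-polling on top of this (requests that stay open until the Redis pub/sub delivers something), the classic pattern, similar to Tornado's chat demo, is to park each handler in a waiter list and finish it later. Here is a rough sketch for the same old-style Tornado as above; how you trigger broadcast (for example from a Redis subscriber callback running on the IOLoop) is left to you and is an assumption, not part of the original answer:
import tornado.ioloop
import tornado.web

class PollHandler(tornado.web.RequestHandler):
    waiters = []  # handlers whose connections are still open

    @tornado.web.asynchronous
    def get(self):
        # Do not finish here; just remember this request so it stays open.
        PollHandler.waiters.append(self)

    @classmethod
    def broadcast(cls, message):
        # Call this when the pub/sub announces data.
        for waiter in cls.waiters:
            waiter.write(message)
            waiter.finish()
        cls.waiters = []

if __name__ == '__main__':
    application = tornado.web.Application([(r"/poll", PollHandler)])
    application.listen(8000)
    tornado.ioloop.IOLoop.instance().start()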

PyGTK Spacing in an HBox

I'm new to GTK and I'm trying to figure out how to accomplish something like this:
+---+------+---+
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
+---+------+---+
I want this done in an HBox. How would I accomplish this? Thanks.
It is done with "packing".
I always keep the class reference under my pillow: http://www.pygtk.org/docs/pygtk/gtk-class-reference.html
Samples can be found in the good tutorial here: http://www.pygtk.org/pygtk2tutorial/sec-DetailsOfBoxes.html
And finally, this shows something like your drawing:
import gtk as g
win = g.Window ()
win.set_default_size(600, 400)
win.set_position(g.WIN_POS_CENTER)
win.connect ('delete_event', g.main_quit)
hBox = g.HBox()
win.add (hBox)
f1 = g.Frame()
f2 = g.Frame()
f3 = g.Frame()
hBox.pack_start(f1)
hBox.pack_start(f2)
hBox.pack_start(f3)
win.show_all ()
g.main ()
Have fun! (and I hope my answer is helpful)
The answer is pack_start() and pack_end().
These functions take a few parameters you can pass to get the desired effect.
If you use Louis' example:
hBox.pack_start(f1, expand=False, fill=False)
hBox.pack_start(f2, expand=True, fill=True, padding=50)
hBox.pack_end(f3, expand=False, fill=False)
Hope that helps!
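Putting the two answers together, a minimal sketch in PyGTK 2 (the middle frame expands while the side frames keep a fixed width; the set_size_request values are placeholders added just to make the empty side frames visible):
import gtk

win = gtk.Window()
win.set_default_size(600, 400)
win.set_position(gtk.WIN_POS_CENTER)
win.connect('delete_event', gtk.main_quit)

hbox = gtk.HBox()
win.add(hbox)

f1, f2, f3 = gtk.Frame(), gtk.Frame(), gtk.Frame()
f1.set_size_request(80, -1)   # placeholder width for the left column
f3.set_size_request(80, -1)   # placeholder width for the right column

hbox.pack_start(f1, expand=False, fill=False)            # fixed-width left column
hbox.pack_start(f2, expand=True, fill=True, padding=50)  # wide middle column
hbox.pack_end(f3, expand=False, fill=False)              # fixed-width right column

win.show_all()
gtk.main()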
