Upload file using telegram-upload in flask? - python

We can upload file using telegram-upload library by using the following command on terminal
telegram-upload file1.mp4 /path/to/file2.mkv
But how do I call this from inside a Python function? That is, if the user passes a file path as an argument, the function should upload that file to the Telegram servers. This is not mentioned in the documentation.
In other words, how do I execute or run shell commands from inside a Python function?

For telegram-upload you can use the upload function in telegram_upload.management, and
for telegram-download the download function in the same file.
Or you can see how they are implemented there:
from telegram_upload.client import Client
from telegram_upload.config import default_config, CONFIG_FILE
from telegram_upload.exceptions import catch
from telegram_upload.files import NoDirectoriesFiles, RecursiveFiles

DIRECTORY_MODES = {
    'fail': NoDirectoriesFiles,
    'recursive': RecursiveFiles,
}


def upload(files, to, config, delete_on_success, print_file_id, force_file, forward, caption, directories,
           no_thumbnail):
    """Upload one or more files to Telegram using your personal account.
    The maximum file size is 1.5 GiB and by default they will be saved in
    your saved messages.
    """
    client = Client(config or default_config())
    client.start()
    files = DIRECTORY_MODES[directories](files)
    if directories == 'fail':
        # Validate now
        files = list(files)
    client.send_files(to, files, delete_on_success, print_file_id, force_file, forward, caption, no_thumbnail)
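If you want to call this from your own code (for example inside a Flask view), one option is to mirror the snippet above directly. A minimal sketch, assuming the Client/send_files signatures match the telegram-upload version you have installed and that 'me' is the saved-messages target:

from telegram_upload.client import Client
from telegram_upload.config import default_config
from telegram_upload.files import NoDirectoriesFiles


def upload_to_telegram(file_paths, to='me'):
    # Hypothetical helper mirroring the upload() function above.
    client = Client(default_config())
    client.start()
    files = list(NoDirectoriesFiles(file_paths))
    # Positional arguments follow the send_files call shown above:
    # delete_on_success, print_file_id, force_file, forward, caption, no_thumbnail
    client.send_files(to, files, False, False, False, (), None, False)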

I found the solution. Using the os module we can run command-line strings inside a Python function, i.e. os.system('telegram-upload file1.mp4 /path/to/file2.mkv')
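If you would rather not build a shell string, the subprocess module gives a bit more control over arguments and errors. A minimal sketch, assuming the telegram-upload command is on your PATH:

import subprocess


def upload_files(*paths):
    # Runs the telegram-upload CLI on the given paths;
    # check=True raises CalledProcessError if the command fails.
    subprocess.run(['telegram-upload', *paths], check=True)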

Related

jupyter notebook: NameError: name 'c' is not defined

Every time I try to launch my notebook I get the error below.
I should mention that I'm new on the project and the file config.py was created before I joined the team.
Does anyone know how to resolve it?
The code currently in place is:
Requirements.txt
psycopg2==2.7.3.2
SQLAlchemy==1.2.2
pandas==0.21.0
docker==3.3.0
python-json-logger
sshtunnel==0.1.4
jupyter
jupytext==1.2
geopy==2.2.0
error detail
~/SG/notebooks/config.py in <module>
1 # Using jupytext
----> 2 c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"
3 c.ContentsManager.default_jupytext_formats = "ipynb,py"
NameError: name 'c' is not defined
code
The line causing the error in the notebook is:
from src.util.connect_postgres import postgres_connexion
The content of the file connect_postgres is:
from sqlalchemy import create_engine
from config.util.database import TARGET_TEST_HOST, TARGET_PROD_HOST, \
    TARGET_TEST_DB, TARGET_PROD_DB, TARGET_TEST_USER, TARGET_PROD_USER, SG_PROD_USER, SG_PROD_HOST
from config.secrets.passwords import TARGET_PROD_PWD, TARGET_TEST_PWD, SG_PROD_PWD
from sshtunnel import SSHTunnelForwarder
import psycopg2


def _create_engine_psg(user, db, host, port, pwd):
    """ Returns a connection object to PostgreSQL """
    url = build_postgres_url(db, host, port, pwd, user)
    return create_engine(url, client_encoding='utf8')


def build_postgres_url(db, host, port, pwd, user):
    url = 'postgresql://{}:{}@{}:{}/{}'.format(user, pwd, host, port, db)
    return url


def postgres_connexion(env):
    if env == 'prod':
        return create_engine_psg_with_tunnel_ssh(TARGET_PROD_DB,
                                                 TARGET_PROD_USER, TARGET_PROD_PWD, SG_PROD_PWD,
                                                 SG_PROD_USER,
                                                 SG_PROD_HOST, TARGET_PROD_HOST)
    else:
        raise ValueError("'env' parameter must be 'prod'.")
config.py
c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"
c.ContentsManager.default_jupytext_formats = "ipynb,py"
I read that I can generate the file and then edit it.
When I tried to create the jupyter_notebook_config, it always ends up in my personal directory:
/Users/marczhr/.jupyter/jupyter_notebook_config.py
but I want it somewhere I can push to git.
Hope that I'm clear ^^
Thank you,
Don't run the notebook from the directory with the configuration file.
The reason is that the code listed imports a config module or package. By launching the notebook from the directory containing the configuration file, Python will import that Jupyter configuration file instead of the correct package or module, with the resulting error.
Instead, run it from somewhere else, or put the configuration file elsewhere.
Or perhaps best, take the two configuration lines and add them to the end of your /Users/marczhr/.jupyter/jupyter_notebook_config.py file, then remove the 2-3 line config.py file.
In the latter case, you can now launch the notebook server from anywhere, and you don't need to specify any configuration file, since Jupyter will automatically use the generated (with added lines) one.
If you want to keep the config.py file, then launch the Jupyter notebook server from another directory and specify the full path, like
jupyter notebook --config=$HOME/SG/notebooks/config.py
All in all, this is a classic name clash upon import, caused by identically named files/directories. Always be wary of that.
(I've commented on some other potential problems in the comments: that still stands, but is irrelevant to the current problem here.)
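If you want to confirm which config module the notebook is actually picking up, a quick throwaway check (run in a notebook cell or a Python session from the same directory) is to print its path:

import config

# If this prints .../SG/notebooks/config.py, the Jupyter configuration
# file is shadowing the intended config package.
print(config.__file__)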

How to use a config file to run a python file multiple times with different settings?

I have one Python script with its settings in a separate JSON config file. The JSON file looks like this:
{
  "connection" : {
    "db_server" : "server",
    "db_name" : "table1",
    "db_user" : "user1"
  }
}
Now I need to run the same Python file more than once, each time with different settings in the config file. The other settings would look like this:
{
  "connection" : {
    "db_server" : "server",
    "db_name" : "table2",
    "db_user" : "user2"
  }
}
I don't need to change anything in the Python script. I open the JSON file in my Python script like this:
with open('settings.json') as json_data_file:
    data = json.load(json_data_file)
json_data_file.close()
Since you cannot add comments in a JSON file, I don't know the easiest way to do this. I want the Python script to run two times simultaneously, each time with different settings from the JSON file.
Thanks in advance!
After launching the Python script, you can just modify the config file in place and run the script a second time. This won't affect the already-running Python program, because it has already read the config.
Or you can have multiple config files with different names and run the script with a command-line argument (i.e. sys.argv[1]) to choose which config file to use. I personally recommend this approach.
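A minimal sketch of that command-line approach, assuming the config filename is passed as the first argument:

import json
import sys

# Usage: python my_script.py settings_user1.json
config_path = sys.argv[1] if len(sys.argv) > 1 else 'settings.json'

with open(config_path) as json_data_file:
    data = json.load(json_data_file)

# The rest of the script uses `data` exactly as before.
print(data['connection']['db_name'])

You can then start the two runs at the same time, each pointing at its own settings file.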
A simple solution is a new script that parses your JSON file, imports your Python script, and then executes it with different parameters using concurrent.futures.
An example (adapted from the example for ThreadPoolExecutor):
import concurrent.futures
import json

from YourModule import MainFunction

# First, open and load your JSON file
with open('yourfilename.json') as f:
    parameter_dict = json.load(f)

# Do some parsing to get your parameters in order
# (in your case, servers, databases, and users)
parameters_to_submit = [
    [entry['db_server'], entry['db_name'], entry['db_user']]
    for entry in parameter_dict.values()
]

# Now, create a ThreadPool to run your script multiple times
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    # Submit the function + parameters to the executor
    submitted_runs = {
        executor.submit(MainFunction, params[0], params[1], params[2]): params
        for params in parameters_to_submit
    }
    # As the results come in, print them
    for future in concurrent.futures.as_completed(submitted_runs):
        params = submitted_runs[future]
        try:
            data = future.result()
        except Exception as exc:
            print(f'Script generated an exception: {exc}')
        else:
            # If need be, you could also write this data to a file here
            print(f'Produced result {data}')

Python fabric calling script "remote path"

I'm using Fabric to connect to a remote host. Once connected, I try to call a script I made (it parses the file given as an argument). But when I call the script from inside my fabfile.py, it assumes the path I gave is on the machine I launch the fabfile from (so not my remote host).
In my fabfile.py I have:
import servclasse

env.host = 'host1'


def listconf():
    # here I browse to the correct folder
    s = servclasse.Server("my.file")  # this is where I want it to open the host1:my.file file and instantiate a class from what it parsed
If I do this, it tries to open the file from the folder where servclasse.py is. Is there a way to give a "remote path" as the argument? I would rather not download the file.
Should I upload the script servclasse.py with operations.put before calling it?
Edit: more info
In my servclasse I have this:
def __init__(self, path):
    self.config = ConfigParser.ConfigParser(allow_no_value=True)
    self.config.readfp(open(path))
The function open() was the problem.
I figured out how to do it, so I'll drop it here in case someone reads this topic one day:
def listconf():
    # first I browse to the correct folder, then
    contents = StringIO.StringIO()
    get("MyFile", contents)
    contents.seek(0)
    s = Server(contents)
and in servclasse.py:
def __init__(self, objfile):
    self.config = ConfigParser.ConfigParser(allow_no_value=True)
    self.config.readfp(objfile)
    # and I do my stuff

AWS Lambda package deployment

I'm trying to deploy a Python .zip package as an AWS Lambda function.
I chose the hello-python blueprint.
I created the first Lambda with the inline code; after that I tried to switch to uploading a deployment .zip.
The package I used is a .zip containing a single file called hello_python.py with the same code as the default inline code sample, shown below:
from __future__ import print_function
import json

print('Loading function')


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    print("value1 = " + event['key1'])
    print("value2 = " + event['key2'])
    print("value3 = " + event['key3'])
    return event['key1']  # Echo back the first key value
    # raise Exception('Something went wrong')
After I click "save and test", nothing happens except a weird red ribbon, with no other substantive error messages. The logs and the run results do not change when I modify the source, repackage it and upload it again.
Lambda functions require a handler in the format <FILE-NAME-NO-EXTENSION>.<FUNCTION-NAME>. In your case the handler is set to lambda_function.lambda_handler (the default value assigned by AWS Lambda). However, you've named your file hello_python.py, so AWS Lambda is looking for a Python file named lambda_function.py and finding nothing.
To fix this either:
Rename your hello_python.py file to lambda_function.py
Modify your lambda function handler to be hello_python.lambda_handler
You can see an example of how this works in the documentation where they create a python function called my_handler() inside the file hello_python.py, and they create a lambda function to call it with the handler hello_python.my_handler.
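As a quick illustration of that mapping for the second option (a minimal sketch, not the full sample above):

# hello_python.py -- with the handler set to "hello_python.lambda_handler",
# AWS Lambda imports the module hello_python and calls lambda_handler(event, context).
def lambda_handler(event, context):
    return event.get('key1')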

cannot access webserver resources using virtualenv and webapp2

I wanted to create a simple app using webapp2. Because I have Google App Engine installed, and I want to use it outside of GAE, I followed the instructions on this page: http://webapp-improved.appspot.com/tutorials/quickstart.nogae.html
This all went well: my main.py is running and handling requests correctly. However, I can't access resources directly.
http://localhost:8080/myimage.jpg or http://localhost:8080/mydata.json
always returns a 404 resource not found page.
It doesn't matter if I put the resources on the WebServer/Documents/ or in the folder where the virtualenv is active.
Please help! :-)
(I am on a Mac 10.6 with Python 2.7)
(Adapted from this question)
Looks like webapp2 doesn't have a static file handler; you'll have to roll your own. Here's a simple one:
import mimetypes
import os

import webapp2


class StaticFileHandler(webapp2.RequestHandler):
    def get(self, path):
        # edit the next line to change the static files directory
        abs_path = os.path.join(os.path.dirname(__file__), path)
        try:
            f = open(abs_path, 'r')
            self.response.headers.add_header('Content-Type', mimetypes.guess_type(abs_path)[0])
            self.response.out.write(f.read())
            f.close()
        except IOError:  # file doesn't exist
            self.response.set_status(404)
And in your app object, add a route for StaticFileHandler:
app = webapp2.WSGIApplication([
    ('/', MainHandler),  # or whatever it's called
    (r'/static/(.+)', StaticFileHandler),  # add this
    # other routes
])
Now http://localhost:8080/static/mydata.json (say) will load mydata.json.
Keep in mind that this code is a potential security risk: it allows any visitor to your website to read everything in your static directory. For this reason, you should keep all your static files in a directory that doesn't contain anything you'd like to restrict access to (e.g. the source code).
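One way to reduce that risk is to resolve the requested path and reject anything that escapes the static directory. A rough sketch (not a complete hardening):

import os

STATIC_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), 'static'))


def safe_static_path(path):
    # Returns an absolute path inside STATIC_ROOT, or None if the request
    # tries to escape it (e.g. via '..' segments).
    abs_path = os.path.abspath(os.path.join(STATIC_ROOT, path))
    if not abs_path.startswith(STATIC_ROOT + os.sep):
        return None
    return abs_path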
