I've got an aiohttp web app that uses routes as Flask-like decorators, running under gunicorn.
I'm having some trouble getting the logs to work correctly, though.
What am I missing here?
No errors are being thrown or logged and the app runs smoothly, but nothing is being logged apart from the start-up logs:
[2018-10-16 09:41:18 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2018-10-16 09:41:18 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2018-10-16 09:41:18 +0000] [1] [INFO] Using worker: aiohttp.worker.GunicornWebWorker
[2018-10-16 09:41:18 +0000] [16] [INFO] Booting worker with pid: 16
[2018-10-16 09:41:18 +0000] [17] [INFO] Booting worker with pid: 17
[2018-10-16 09:41:18 +0000] [18] [INFO] Booting worker with pid: 18
[2018-10-16 09:41:18 +0000] [19] [INFO] Booting worker with pid: 19
[2018-10-16 09:41:18 +0000] [20] [INFO] Booting worker with pid: 20
[2018-10-16 09:41:18 +0000] [21] [INFO] Booting worker with pid: 21
[2018-10-16 09:41:18 +0000] [22] [INFO] Booting worker with pid: 22
[2018-10-16 09:41:18 +0000] [23] [INFO] Booting worker with pid: 23
My app/__init__.py file is as follows:
import logging
import os
from logging import handlers

from aiohttp.web import Application

from app.routes import routes
from utils.logging import CloggerFormatter


def create_app(app_config):
    app = Application()
    logger = logging.getLogger('aiohttp.web')
    log_level = logging.DEBUG
    if os.environ.get('LOG_LEVEL'):
        log_level = os.environ['LOG_LEVEL']
    app.router.add_routes(routes)
    logger.setLevel(log_level)
    logger.addHandler(CloggerFormatter)
    app['config'] = app_config
    return app
And then in my app/routes.py file I access the logger with request.app.logger from within the route definition, like:
import uuid

from aiohttp.web import Response, RouteTableDef

routes = RouteTableDef()


@routes.post('/background-checks')
async def api_background_check(request):
    request_identifier = request.headers.get('X-Request-ID')
    if not request_identifier:
        request_identifier = uuid.uuid4()
    request.app.logger.info('Checking background for request: %s', request_identifier)
This is my utils/handlers/logging.py file:
from time import strftime, gmtime
from logging import Formatter


class CloggerFormatter(Formatter):
    """
    Logging module formatter in accordance with the yoti clogger manual
    guidelines.
    """

    converter = gmtime

    def __init__(self, datefmt=None):
        fmt = ('level:%(levelname)s'
               '\ttime:%(asctime)s'
               '\tmessage:%(message)s')
        Formatter.__init__(self, fmt=fmt, datefmt=datefmt)

    def formatTime(self, record, datefmt=None):
        """
        Return the creation time of the LogRecord using the RFC 3339
        format if datefmt is not specified.
        """
        ct = self.converter(record.created)
        if datefmt:
            s = strftime(datefmt, ct)
        else:
            t = strftime('%Y-%m-%dT%H:%M:%S', ct)
            s = '%s.%03dZ' % (t, record.msecs)
        return s
Use the root aiohttp logger instead: logger = logging.getLogger('aiohttp').
In particular, the access logger uses the 'aiohttp.access' name, but you probably want to see the other log messages, such as errors and warnings, as well.
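As a minimal sketch of that wiring (assuming the CloggerFormatter class from the question), note also that addHandler() expects a Handler instance, not a Formatter class, so the formatter is set on a StreamHandler rather than passed to addHandler() directly:

```python
import logging
import sys

# Attach a handler to the root 'aiohttp' logger so messages from all
# of its children ('aiohttp.web', 'aiohttp.access', 'aiohttp.server',
# ...) propagate up to it.
logger = logging.getLogger('aiohttp')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler(sys.stdout)
# handler.setFormatter(CloggerFormatter())  # formatter *instance* from the question
logger.addHandler(handler)
```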
I have been trying for the last couple of days to build an nginx/gunicorn/flask stack in Puppet to deploy repeatedly in our environment. Unfortunately, I am coming up short at the last moment and could really use some help. I have dumped everything I thought relevant below; if anyone can lend a hand it would be very helpful!
gunicorn cli errors
(pyvenv) [root@guadalupe project1]# gunicorn wsgi:application
[2022-12-01 15:07:29 -0700] [13060] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:07:29 -0700] [13060] [INFO] Listening at: http://127.0.0.1:8000 (13060)
[2022-12-01 15:07:29 -0700] [13060] [INFO] Using worker: sync
[2022-12-01 15:07:29 -0700] [13063] [INFO] Booting worker with pid: 13063
^C[2022-12-01 15:08:01 -0700] [13060] [INFO] Handling signal: int
[2022-12-01 15:08:01 -0700] [13063] [INFO] Worker exiting (pid: 13063)
[2022-12-01 15:08:01 -0700] [13060] [INFO] Shutting down: Master
(pyvenv) [root@guadalupe project1]# gunicorn wsgi:application -b project1.sock
[2022-12-01 15:08:09 -0700] [13067] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:08:09 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:10 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:11 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:12 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:13 -0700] [13067] [ERROR] Retrying in 1 second.
[2022-12-01 15:08:14 -0700] [13067] [ERROR] Can't connect to ('project1.sock', 8000)
gunicorn debug log from running
$ gunicorn wsgi:application -b project1.sock --error-logfile error.log --log-level 'debug'
[2022-12-01 15:28:04 -0700] [16349] [DEBUG] Current configuration:
config: ./gunicorn.conf.py
wsgi_app: None
bind: ['project1.sock']
backlog: 2048
workers: 1
worker_class: sync
threads: 1
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 30
graceful_timeout: 30
keepalive: 2
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
print_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /home/bit-web/pyvenv/project1
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 0
group: 0
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1']
accesslog: None
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: error.log
loglevel: debug
capture_output: False
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
syslog_addr: udp://localhost:514
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: False
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: wsgi:application
pythonpath: None
paste: None
on_starting: <function OnStarting.on_starting at 0x7f7b0fa6eae8>
on_reload: <function OnReload.on_reload at 0x7f7b0fa6ebf8>
when_ready: <function WhenReady.when_ready at 0x7f7b0fa6ed08>
pre_fork: <function Prefork.pre_fork at 0x7f7b0fa6ee18>
post_fork: <function Postfork.post_fork at 0x7f7b0fa6ef28>
post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f7b0fa860d0>
worker_int: <function WorkerInt.worker_int at 0x7f7b0fa861e0>
worker_abort: <function WorkerAbort.worker_abort at 0x7f7b0fa862f0>
pre_exec: <function PreExec.pre_exec at 0x7f7b0fa86400>
pre_request: <function PreRequest.pre_request at 0x7f7b0fa86510>
post_request: <function PostRequest.post_request at 0x7f7b0fa86598>
child_exit: <function ChildExit.child_exit at 0x7f7b0fa866a8>
worker_exit: <function WorkerExit.worker_exit at 0x7f7b0fa867b8>
nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f7b0fa868c8>
on_exit: <function OnExit.on_exit at 0x7f7b0fa869d8>
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1']
keyfile: None
certfile: None
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
strip_header_spaces: False
[2022-12-01 15:28:04 -0700] [16349] [INFO] Starting gunicorn 20.1.0
[2022-12-01 15:28:04 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:04 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:05 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:05 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:06 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:06 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:07 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:07 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:08 -0700] [16349] [DEBUG] connection to ('project1.sock', 8000) failed: [Errno -2] Name or service not known
[2022-12-01 15:28:08 -0700] [16349] [ERROR] Retrying in 1 second.
[2022-12-01 15:28:09 -0700] [16349] [ERROR] Can't connect to ('project1.sock', 8000)
wsgi.py
from flask import Flask

application = Flask(__name__)


@application.route("/")
def hello():
    return "<h1 style='color:blue'>Hello There!</h1>"


if __name__ == "__main__":
    application.run(host='0.0.0.0')
the nginx.conf file that is being sourced from Puppet
(pyvenv) [root@guadalupe project1]# cat /etc/nginx/sites-enabled/project1.conf
include /etc/nginx/conf.d/*.conf;

upstream app_a {
    server unix:///home/bit-web/pyvenv/project1/project1.sock;
}

server {
    listen 80;
    server_name guadalupe.int.colorado.edu, 172.20.13.55;

    location / {
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        include uwsgi_params;
        uwsgi_pass app_a;
    }
}
nginx error log
2022/12/01 15:02:15 [error] 11743#11743: *1 upstream prematurely closed connection while reading response header from upstream, client: 198.11.28.224, server: guadalupe.int.colorado.edu,, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///home/bit-web/pyvenv/project1/project1.sock:", host: "guadalupe.int.colorado.edu", referrer: "http://guadalupe.int.colorado.edu/"
socket information
(pyvenv) [root@guadalupe project1]# ls
project1.sock  __pycache__  wsgi.py
(pyvenv) [root@guadalupe project1]# pwd
/home/bit-web/pyvenv/project1
gunicorn service
(pyvenv) [root@guadalupe project1]# cat /etc/systemd/system/gunicorn.service
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=bit-web
Group=nginx
WorkingDirectory=/home/bit-web/pyvenv/project1
Environment="PATH=/home/bit-web/pyvenv/bin"
ExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:project1.sock -m 007 wsgi:application
[Install]
WantedBy=multi-user.target
In your gunicorn service file, instead of
ExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:project1.sock -m 007 wsgi:application
try this (i.e. use the full path to the .sock file):
ExecStart=/home/bit-web/pyvenv/bin/gunicorn --workers 3 --bind unix:/home/bit-web/pyvenv/project1/project1.sock -m 007 wsgi:application
I have a Python/Flask app running on Gunicorn. For some reason, somewhere between 2 to 2.5 hours of starting the app, this always happens:
[2021-05-19 03:17:38 +0000] [26530] [DEBUG] Closing connection.
[2021-05-19 03:18:04 +0000] [26526] [WARNING] Worker with pid 26529 was terminated due to signal 1
[2021-05-19 03:18:04 +0000] [26526] [WARNING] Worker with pid 26530 was terminated due to signal 1
[2021-05-19 03:18:04 +0000] [26526] [WARNING] Worker with pid 26527 was terminated due to signal 1
[2021-05-19 03:18:04 +0000] [26526] [WARNING] Worker with pid 26528 was terminated due to signal 1
[2021-05-19 03:18:04 +0000] [27837] [INFO] Booting worker with pid: 27837
[2021-05-19 03:18:04 +0000] [26526] [DEBUG] 1 workers
[2021-05-19 03:18:04 +0000] [26526] [INFO] Handling signal: hup
[2021-05-19 03:18:04 +0000] [26526] [INFO] Hang up: Master
[2021-05-19 03:18:04 +0000] [26526] [DEBUG] Current configuration:
config: ./gunicorn.conf.py
wsgi_app: None
bind: ['0.0.0.0:8081']
backlog: 2048
workers: 4
worker_class: sync
threads: 1
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 9223372036
graceful_timeout: 30
keepalive: 2
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
print_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /home/ubuntu
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 0
group: 0
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1']
accesslog: None
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: -
loglevel: debug
capture_output: False
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
syslog_addr: udp://localhost:514
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: False
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: __init__:create_app()
pythonpath: None
paste: None
on_starting: <function OnStarting.on_starting at 0x7f3f9ecc3ca0>
on_reload: <function OnReload.on_reload at 0x7f3f9ecc3dc0>
when_ready: <function WhenReady.when_ready at 0x7f3f9ecc3ee0>
pre_fork: <function Prefork.pre_fork at 0x7f3f9ec5a040>
post_fork: <function Postfork.post_fork at 0x7f3f9ec5a160>
post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f3f9ec5a280>
worker_int: <function WorkerInt.worker_int at 0x7f3f9ec5a3a0>
worker_abort: <function WorkerAbort.worker_abort at 0x7f3f9ec5a4c0>
pre_exec: <function PreExec.pre_exec at 0x7f3f9ec5a5e0>
pre_request: <function PreRequest.pre_request at 0x7f3f9ec5a700>
post_request: <function PostRequest.post_request at 0x7f3f9ec5a790>
child_exit: <function ChildExit.child_exit at 0x7f3f9ec5a8b0>
worker_exit: <function WorkerExit.worker_exit at 0x7f3f9ec5a9d0>
nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f3f9ec5aaf0>
on_exit: <function OnExit.on_exit at 0x7f3f9ec5ac10>
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1']
keyfile: /etc/letsencrypt/live/site.io/privkey.pem
certfile: /etc/letsencrypt/live/site.io/fullchain.pem
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
strip_header_spaces: False
[2021-05-19 03:18:04 +0000] [26526] [DEBUG] 4 workers
[2021-05-19 03:18:04 +0000] [27838] [INFO] Booting worker with pid: 27838
[2021-05-19 03:18:04 +0000] [27839] [INFO] Booting worker with pid: 27839
[2021-05-19 03:18:04 +0000] [27841] [INFO] Booting worker with pid: 27841
[2021-05-19 03:18:04 +0000] [27840] [INFO] Booting worker with pid: 27840
[2021-05-19 03:18:08 +0000] [27837] [INFO] Worker exiting (pid: 27837)
[2021-05-19 03:18:08 +0000] [26526] [WARNING] Worker with pid 27837 was terminated due to signal 15
I have no idea how or why Gunicorn is apparently receiving a HUP signal (https://docs.gunicorn.org/en/19.3/signals.html#reload-the-configuration), causing it to reload the app. Often nothing else is happening in the app: no users using it, and nothing else in the logs. This is a problem because if someone is using the app, restarting it in the middle of running code (there's a function that can take minutes to finish) will break it.
I'm running an AWS EC2 t2.medium instance, python3.9 and Gunicorn 20.1.0, launching the app as follows:
/usr/bin/python3.9 -m gunicorn.app.wsgiapp -w 4 -b 0.0.0.0:8081 -t 9223372036 '__init__:create_app()' --certfile=/etc/letsencrypt/live/site.io/fullchain.pem --keyfile=/etc/letsencrypt/live/site.io/privkey.pem --log-level 'debug'
How can I stop Gunicorn restarting like this?
I have implemented a simple microservice using Flask, where the method that handles the request calculates a response based on the request data and a rather large datastructure loaded into memory.
Now, when I deploy this application using gunicorn and a large number of workers, I would simply like to share the datastructure between the request handlers of all workers. Since the data is only read, there is no need for locking or similar. What is the best way to do this?
Essentially what would be needed is this:
load/create the large data structure when the server is initialized
somehow get a handle inside the request handling method to access the data structure
As far as I understand, gunicorn allows me to implement various hook functions, e.g. for the time the server gets initialized, but a Flask request handler method does not know anything about the gunicorn server's data structures.
I do not want to use something like Redis or a database system for this, since all the data is in a datastructure that needs to be loaded in memory, and no deserialization should be involved.
The calculation carried out for each request, which uses the large data structure, can be lengthy, so it must happen concurrently in a truly independent thread or process for each request (this should scale up by running on a multi-core computer).
You can use preloading.
This will allow you to create the data structure ahead of time, then fork each request handling process. This works because of copy-on-write and the knowledge that you are only reading from the large data structure.
Note: Although this will work, it should probably only be used for very small apps or in a development environment. I think the more production-friendly way of doing this would be to queue up these calculations as tasks on the backend since they will be long-running. You can then notify users of the completed state.
Here is a little snippet showing the difference preloading makes.
# app.py
import flask

app = flask.Flask(__name__)


def load_data():
    print('calculating some stuff')
    return {'big': 'data'}


@app.route('/')
def index():
    return repr(data)


data = load_data()
Running with gunicorn app:app --workers 2:
[2017-02-24 09:01:01 -0500] [38392] [INFO] Starting gunicorn 19.6.0
[2017-02-24 09:01:01 -0500] [38392] [INFO] Listening at: http://127.0.0.1:8000 (38392)
[2017-02-24 09:01:01 -0500] [38392] [INFO] Using worker: sync
[2017-02-24 09:01:01 -0500] [38395] [INFO] Booting worker with pid: 38395
[2017-02-24 09:01:01 -0500] [38396] [INFO] Booting worker with pid: 38396
calculating some stuff
calculating some stuff
And running with gunicorn app:app --workers 2 --preload:
calculating some stuff
[2017-02-24 09:01:06 -0500] [38403] [INFO] Starting gunicorn 19.6.0
[2017-02-24 09:01:06 -0500] [38403] [INFO] Listening at: http://127.0.0.1:8000 (38403)
[2017-02-24 09:01:06 -0500] [38403] [INFO] Using worker: sync
[2017-02-24 09:01:06 -0500] [38406] [INFO] Booting worker with pid: 38406
[2017-02-24 09:01:06 -0500] [38407] [INFO] Booting worker with pid: 38407
I have a Flask app that runs fine locally. I pushed it to this Heroku URL:
https://secret-sierra-6425.herokuapp.com/
And the landing page works, which means the '/' route code works.
But when I try to access other resources, it throws a 500 error and I don't see anything useful in the logs:
2015-04-06T08:17:29.687713+00:00 app[web.1]: [2015-04-06 08:17:29 +0000] [3] [INFO] Shutting down: Master
2015-04-06T08:17:30.397309+00:00 app[web.1]: [2015-04-06 08:17:30 +0000] [3] [INFO] Starting gunicorn 19.3.0
2015-04-06T08:17:30.484520+00:00 app[web.1]: [2015-04-06 08:17:30 +0000] [10] [INFO] Booting worker with pid: 10
2015-04-06T08:17:30.398009+00:00 app[web.1]: [2015-04-06 08:17:30 +0000] [3] [INFO] Listening at: http://0.0.0.0:18230 (3)
2015-04-06T08:17:30.398107+00:00 app[web.1]: [2015-04-06 08:17:30 +0000] [3] [INFO] Using worker: sync
2015-04-06T08:17:30.407696+00:00 app[web.1]: [2015-04-06 08:17:30 +0000] [9] [INFO] Booting worker with pid: 9
2015-04-06T08:17:30.682319+00:00 heroku[web.1]: State changed from starting to up
2015-04-06T08:17:30.618107+00:00 heroku[web.1]: Process exited with status 0
2015-04-06T08:18:23.485060+00:00 heroku[router]: at=info method=GET path="/" host=secret-sierra-6425.herokuapp.com request_id=cbdfada9-8b28-4150-a7c0-eddc458658d9 fwd="23.252.53.59" dyno=web.1 connect=1ms service=2ms status=200 bytes=186
2015-04-06T08:18:35.630070+00:00 heroku[router]: at=info method=POST path="/messages" host=secret-sierra-6425.herokuapp.com request_id=aaecbfbd-57b5-49eb-aee0-ad450ba67e36 fwd="23.252.53.59" dyno=web.1 connect=1ms service=19ms status=500 bytes=456
This is the code to start my app:
if __name__ == '__main__':
    app.debug = True  # Only for development, not prod
    app.logger.addHandler(logging.StreamHandler(sys.stdout))
    app.logger.setLevel(logging.ERROR)
    app.run()
Procfile:
web: gunicorn mailr:app --log-file=-
worker: python worker.py
My application uses redis queue as well. I've added the RedisToGo addon. Still no luck.
Can anyone help me find a way to display the logs so that I can debug what's going wrong?
EDIT:
Also tried changing the start to this, but still no luck:
app.debug = True  # Only for development, not prod
file_handler = StreamHandler()
app.logger.setLevel(logging.DEBUG)  # set the desired logging level here
app.logger.addHandler(file_handler)
app.run()
Got it. The logging setup code should have been outside of the if __name__ == '__main__' block.
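Under gunicorn the module is imported rather than executed as a script, so __name__ is never '__main__' and anything inside that guard is skipped. A minimal sketch of the fix (the module and logger names here are illustrative):

```python
import logging
import sys

# When gunicorn imports this module, __name__ is the module's name
# (e.g. 'mailr'), not '__main__', so setup inside the guard below
# never runs. Logging configuration therefore belongs at module level.
logger = logging.getLogger('mailr')  # hypothetical app logger
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.ERROR)

if __name__ == '__main__':
    # Only reached with `python mailr.py`, never under gunicorn.
    logger.error('running the development server directly')
```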
I have a Flask app with the following route:
@app.route('/')
def index():
    console = logging.StreamHandler()
    log = logging.getLogger("asdasd")
    log.addHandler(console)
    log.setLevel(logging.DEBUG)

    log.error("Something")
    print >> sys.stderr, "Another thing"
    return 'ok'
I run this using
python gunicorn --access-logfile /mnt/log/test.log --error-logfile /mnt/log/test.log --bind 0.0.0.0:8080 --workers 2 --worker-class gevent --log-level debug server:app
The Logs are as below:
2014-06-26 00:13:55 [21621] [INFO] Using worker: gevent
2014-06-26 00:13:55 [21626] [INFO] Booting worker with pid: 21626
2014-06-26 00:13:55 [21627] [INFO] Booting worker with pid: 21627
2014-06-26 00:14:05 [21626] [DEBUG] GET /
10.224.67.41 - - [26/Jun/2014:00:14:14 +0000] "GET / HTTP/1.1" 200 525 "-" "python-requests/2.2.1 CPython/2.7.5 Darwin/13.2.0"
2014-06-26 00:14:14 [21626] [DEBUG] Closing connection.
What's happening to my logs in index method?
As of Gunicorn 19.0, gunicorn no longer redirects stderr to its logs.
See https://github.com/benoitc/gunicorn/commit/41523188bc05fcbba840ba2e18ff67cd9df638e9
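One way to still get application messages into gunicorn's error log (a sketch, assuming gunicorn's standard 'gunicorn.error' logger; 'myapp' is an illustrative name) is to reuse gunicorn's handlers for your own logger:

```python
import logging

# gunicorn writes its own output through the 'gunicorn.error' logger.
# Reusing its handlers sends application messages to the same
# destination as gunicorn's logs (e.g. the file given via --error-logfile).
gunicorn_logger = logging.getLogger('gunicorn.error')

log = logging.getLogger('myapp')  # hypothetical application logger
log.handlers = gunicorn_logger.handlers
log.setLevel(logging.DEBUG)
```

Alternatively, the capture_output setting (visible as capture_output in the configuration dumps above) can re-enable redirecting stdout/stderr into the error log.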