If I run
python -m manimlib scene.py ket_bra
my scene renders fine in the interactive viewer, but I don't get an output file. The terminal prints the following:
ManimGL v1.6.1
[13:55:48] INFO Using the default configuration file, which you can modify in `c:\users\miika\manim\manimlib\default_config.yml` config.py:323
INFO If you want to create a local configuration file, you can create a file named `custom_config.yml`, or run `manimgl --config` config.py:324
WARNING You may be using Windows platform and have not specified the path of `temporary_storage`, which may cause OSError. So it is recommended to specify the `temporary_storage` in the config file config.py:290
(.yml)
(process:3548): GLib-GIO-WARNING **: 13:55:49.613: Unexpectedly, UWP app `Clipchamp.Clipchamp_2.3.0.0_neutral__yxz26nhyzhsrt' (AUMId `Clipchamp.Clipchamp_yxz26nhyzhsrt!App') supports 46 extensions but has no verbs
If I add the -p parameter to the command, the interactive window stays blank and doesn't render the scene, and in neither case do I get an output file, which is what I'm after; the terminal output is the same as before. Also, if it's of note, the background is grey, even though it appears to be black in all the samples I can find. I have absolutely no idea what's going on, as I can't find anybody else with a similar issue. I'm using Windows 10 and v1.6.1 of 3b1b's manim. The scene in this case is as follows (though the issue appears regardless of what the scene is):
from manimlib import *


class ket_bra(Scene):
    def construct(self):
        ket_q0 = Tex(r"|q_0\rangle")
        ket_0 = Tex(r"|0\rangle")
        ket_1 = Tex(r"|1\rangle")
        ket_0_v2 = Tex(r"|0\rangle")
        ket_1_v2 = Tex(r"|1\rangle")
        ket_0_v3 = Tex(r"""|0\rangle=\begin{pmatrix}
                           1\\
                           0
                           \end{pmatrix}""")
        ket_1_v3 = Tex(r"""|1\rangle=\begin{pmatrix}
                           0\\
                           1
                           \end{pmatrix}""")

        bra_kets = VGroup(ket_q0, ket_0, ket_1).arrange(RIGHT, buff=1)
        v_bra_kets = VGroup(ket_0_v2, ket_1_v2).arrange(RIGHT, buff=1.5)
        bra_kets_def = VGroup(ket_0_v3, ket_1_v3).arrange(RIGHT, buff=1.5)

        self.play(Write(ket_q0), Write(ket_0), Write(ket_1))
        self.wait(0.5)
        self.play(FadeOut(ket_q0))
        self.play(Transform(ket_0, ket_0_v2), Transform(ket_1, ket_1_v2))
        self.wait(1)
        self.play(Transform(ket_0, ket_0_v3), Transform(ket_1, ket_1_v3))
Well, I fixed the problem without fixing the problem: I just switched to the community edition of manim, and it works exactly as intended. So if you encounter the same problem and you're using the 3b1b version of manim, I recommend just switching to the community version; they are mostly functionally equivalent, but the community edition appears to be less buggy, at least for me. Here's a super easy installation guide for it: https://docs.manim.community/en/stable/installation/windows.html
However, I won't mark this as the best answer, because I didn't really solve the problem, and I still get the GLib-GIO warning shown above and have no idea what it is. It appears not to affect functionality, though, so it's fine, lol.
Windows Solution
Create TempLatex directory in C drive
Find manimlib/default_config.yml in the manim (manimgl version) directory and open it with a text editor
Modify line 18. Before: `temporary_storage: ""` After: `temporary_storage: "C:/TempLatex"`
Save
From:
https://github.com/3b1b/manim/issues/1661#issuecomment-996575974
I am trying to change the user of a print job in the queue: I want to create the job from a service account but send it to another user's follow-me printing queue. I'm using the win32print module from pywin32 in Python. Here is an example of my code:
from win32 import win32print

JOB_INFO_LEVEL = 2

# Read the raw PCL data that will be sent to the printer
pclExample = open("sample.pcl", "rb").read()

printer_name = win32print.GetDefaultPrinter()
hPrinter = win32print.OpenPrinter(printer_name)
try:
    jobID = win32print.StartDocPrinter(hPrinter, 1, ("PCL Data test", None, "RAW"))
    # Here we try to change the user by extracting the job and then setting it again
    jobInfoDict = win32print.GetJob(hPrinter, jobID, JOB_INFO_LEVEL)
    jobInfoDict["pUserName"] = "exampleUser"
    win32print.SetJob(hPrinter, jobID, JOB_INFO_LEVEL, jobInfoDict, win32print.JOB_CONTROL_RESUME)

    try:
        win32print.StartPagePrinter(hPrinter)
        win32print.WritePrinter(hPrinter, pclExample)
        win32print.EndPagePrinter(hPrinter)
    finally:
        win32print.EndDocPrinter(hPrinter)
finally:
    win32print.ClosePrinter(hPrinter)
The problem is I get an error at the win32print.SetJob() line. If JOB_INFO_LEVEL is set to 1, then I get the following error:
(1804, 'SetJob', 'The specified datatype is invalid.')
This is a known bug related to how the underlying C++ code handles the call (Issue here).
If JOB_INFO_LEVEL is set to 2, then I get the following error:
(1798, 'SetJob', 'The print processor is unknown.')
However, this is the print processor that came back from win32print.GetJob(). Without trying to change the user, the job prints fine, so I'm not sure what is wrong.
Any help would be hugely appreciated! :)
EDIT:
Using Python 3.8.5 and Pywin32 303
At the beginning I thought it was a misunderstanding (I was also a bit skeptical about the bug report), mainly because of the following paragraph (which apparently is wrong) from [MS.Docs]: SetJob function (emphasis is mine):
The following members of a JOB_INFO_1, JOB_INFO_2, or JOB_INFO_4 structure are ignored on a call to SetJob: JobId, pPrinterName, pMachineName, pUserName, pDrivername, Size, Submitted, Time, and TotalPages.
But I did some tests and ran into the problem. The problem is as described in the bug: filling JOB_INFO_* string members (which are LPTSTRs) with char* data.
I submitted [GitHub]: mhammond/pywin32 - Fix: win32print.SetJob sending ANSI to UNICODE API (with the fix, neither of the two errors pops up). It was merged into main on 220331.
When testing the fix, I was able to change various properties of an existing job. I was amazed that the data didn't even have to be valid (like below); I'm a bit curious what would happen when such a job is executed (right now I don't have a connection to a printer):
Change pUserName to str(random.randint(0, 10000)) to make sure it changes on each script run (PrintScreens taken separately and assembled in Paint):
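For reference, the test boiled down to something like the sketch below (not the exact script from the screenshots: it just grabs whatever jobs are queued on the default printer and rewrites pUserName, and it requires a PyWin32 build that contains the fix):

import random

import win32print

JOB_INFO_LEVEL = 2

hPrinter = win32print.OpenPrinter(win32print.GetDefaultPrinter())
try:
    # Look at (up to) the first 99 queued jobs on the default printer
    for job in win32print.EnumJobs(hPrinter, 0, 99, JOB_INFO_LEVEL):
        job_id = job["JobId"]
        info = win32print.GetJob(hPrinter, job_id, JOB_INFO_LEVEL)
        info["pUserName"] = str(random.randint(0, 10000))  # changes on every run
        win32print.SetJob(hPrinter, job_id, JOB_INFO_LEVEL, info,
                          win32print.JOB_CONTROL_RESUME)
finally:
    win32print.ClosePrinter(hPrinter)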
Ways to go further:
Wait for a new PyWin32 version (containing this fix) to be released. This is the recommended approach, but it will also take more time (and it's unclear when it will happen)
Get the sources, either:
from main
from b303 (last stable branch), and apply the (above) patch(1)
then build the module (.pyd) and copy it into PyWin32's site-packages directory on your Python installation(s). This is faster, but it requires some deeper knowledge, and maintenance might become a nightmare
Footnotes
#1: Check [SO]: Run / Debug a Django application's UnitTests from the mouse right click context menu in PyCharm Community Edition? (#CristiFati's answer) (Patching UTRunner section) for how to apply patches (on Win).
Currently I'm using a toolbox which makes use of CoolProp. The REFPROP library is used in the code, e.g. ... = CP.AbstractState('REFPROP', fluid).
When I run the script I get the following error message: "ValueError: You cannot use the REFPROPMixtureBackend". I don't understand how to access the REFPROP library. I used pip install ctREFPROP and installed MINI-REFPROP from NIST.
How do I make the REFPROP library available to Python? Is it something I have to pay for?
Set the path of REFPROP library like this:
CoolProp.set_config_string(CoolProp.ALTERNATIVE_REFPROP_PATH, r"C:\Program Files (x86)\MINI-REFPROP")
NOTE:
MINI-REFPROP contains a limited number of pure fluids (water, CO2, R134a, nitrogen, oxygen, methane, propane, helium, hydrogen, and dodecane), along with air as a pseudo-pure fluid.
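A minimal end-to-end sketch (water and the default MINI-REFPROP install path are assumptions here; adjust both to your setup):

import CoolProp.CoolProp as CP

# Tell CoolProp where the (MINI-)REFPROP installation lives
CP.set_config_string(CP.ALTERNATIVE_REFPROP_PATH,
                     r"C:\Program Files (x86)\MINI-REFPROP")

# High-level check that the backend is now reachable:
# density of water at 101325 Pa / 300 K, computed through REFPROP
print(CP.PropsSI("D", "P", 101325, "T", 300, "REFPROP::Water"))

# The low-level interface used by the toolbox should work the same way
state = CP.AbstractState("REFPROP", "Water")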
I am using pygraphviz to create a large number of graphs for different configurations of data. I have found that, no matter what information is put in the graphs, the program will crash after drawing the 170th graph. There are no error messages; the program just stops. Is there something that needs to be reset if drawing this many graphs?
I am running Python 3.7 on a Windows 10 machine, with PyGraphviz 1.5 and Graphviz 2.38.
import os

import pygraphviz

for graph_number in range(200):
    config_graph = pygraphviz.AGraph(strict=False, directed=False, compound=True,
                                     ranksep='0.2', nodesep='0.2')
    # Create Directory
    if not os.path.exists('Graph'):
        os.makedirs('Graph')
    # Draw Graph
    print('draw_' + str(graph_number))
    config_graph.layout(prog='dot')
    config_graph.draw('Graph/' + str(graph_number) + '.png')
I was able to consistently reproduce the behavior with:
Python 3.7.6 (pc064 (64-bit), then also with pc032)
PyGraphviz 1.5 (that I built; available for download at [GitHub]: CristiFati/Prebuilt-Binaries - Various software built on various platforms (under PyGraphviz, naturally). You might also want to check [SO]: Installing pygraphviz on Windows 10 64-bit, Python 3.6 (#CristiFati's answer))
Graphviz 2.42.2 (pc032, same as the PyGraphviz build above)
I suspected Undefined Behavior somewhere in the code, even though the behavior was precisely the same:
OK for 169 graphs
Crash for 170
I did some debugging (added some print statements in agraph.py, and some printf calls in cgraph.dll (write.c)).
PyGraphviz invokes Graphviz's tools (.exes) for many operations. For that, it uses subprocess.Popen and communicates with the child process via its 3 available streams (stdin, stdout, stderr).
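For illustration only (this is not PyGraphviz's actual code, and it assumes dot is on the PATH): each such invocation opens three OS-level pipe handles on the parent side, one per stream.

import subprocess

# One child process per graph; stdin / stdout / stderr are three pipes (fds)
# held by the parent until they are closed.
proc = subprocess.Popen(["dot", "-Tpng"],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
png_data, errors = proc.communicate(input=b"digraph { a -> b }")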
From the beginning I noticed that 170 * 3 = 510 (awfully close to 512 (0x200)), but I didn't pay as much attention as I should have until later (mostly because the Python process running the code below had no more than ~150 open handles in Task Manager (TM) and also in Process Explorer (PE)).
However, a bit of Googling revealed:
[SO]: Is there a limit on number of open files in Windows (#stackprogrammer's answer) (and from here)
[MS.Learn]: _setmaxstdio (which states (emphasis is mine)):
C run-time I/O now supports up to 8,192 files open simultaneously at the low I/O level. This level includes files opened and accessed using the _open, _read, and _write family of I/O functions. By default, up to 512 files can be open simultaneously at the stream I/O level. This level includes files opened and accessed using the fopen, fgetc, and fputc family of functions. The limit of 512 open files at the stream I/O level can be increased to a maximum of 8,192 by use of the _setmaxstdio function.
[SO]: Python: Which command increases the number of open files on Windows? (#NorthCat's answer)
Below is your code, which I modified for debugging and to reproduce the error. It needs the PyWin32 package (python -m pip install pywin32), purely for code shortness' sake, as the same thing can be achieved via ctypes.
code00.py:
#!/usr/bin/env python

import os
import sys
#import time

import pygraphviz as pgv
import win32file as wfile


def handle_graph(idx, dir_name):
    graph_name = "draw_{:03d}".format(idx)
    graph_args = {
        "name": graph_name,
        "strict": False,
        "directed": False,
        "compound": True,
        "ranksep": "0.2",
        "nodesep": "0.2",
    }
    graph = pgv.AGraph(**graph_args)
    # Draw Graph
    img_base_name = graph_name + ".png"
    print(" {:s}".format(img_base_name))
    graph.layout(prog="dot")
    img_full_name = os.path.join(dir_name, img_base_name)
    graph.draw(img_full_name)
    graph.close()  # !!! Has NO (visible) effect, but I think it should be called anyway !!!


def main(*argv):
    print("OLD max open files: {:d}".format(wfile._getmaxstdio()))
    # 513 is enough for your original code (170 graphs), but you can set it up to 8192
    wfile._setmaxstdio(513)  # !!! COMMENT this line to reproduce the crash !!!
    print("NEW max open files: {:d}".format(wfile._getmaxstdio()))
    dir_name = "Graph"
    # Create Directory
    if not os.path.isdir(dir_name):
        os.makedirs(dir_name)
    #ts_global_start = time.time()
    start = 0
    count = 170
    #count = 1
    step_sleep = 0.05
    for i in range(start, start + count):
        #ts_local_start = time.time()
        handle_graph(i, dir_name)
        #print(" Time: {:.3f}".format(time.time() - ts_local_start))
        #time.sleep(step_sleep)
    #handle_graph(count, dir_name)
    #print("Global time: {:.3f}".format(time.time() - ts_global_start - step_sleep * count))


if __name__ == "__main__":
    print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")),
                                                   64 if sys.maxsize > 0x100000000 else 32, sys.platform))
    rc = main(*sys.argv[1:])
    print("\nDone.\n")
    sys.exit(rc)
Output:
e:\Work\Dev\StackOverflow\q060876623> "e:\Work\Dev\VEnvs\py_pc064_03.07.06_test0\Scripts\python.exe" ./code00.py
Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)] 064bit on win32
OLD max open files: 512
NEW max open files: 513
draw_000.png
draw_001.png
draw_002.png
...
draw_167.png
draw_168.png
draw_169.png
Done.
Conclusions:
Apparently, some file handles (fds) remain open, although they are not "seen" by TM or PE (probably because they exist at a lower level). I don't know why this happens (is it an MS UCRT bug?); as far as I'm concerned, once a child process ends, its streams should be closed, but I don't know how to force that (which would be the proper fix)
Also, the behavior (crash) when attempting to write (not open) to an fd above the limit seems a bit strange
As a workaround, the maximum number of open fds can be increased. Based on the inequality 3 * (graph_count + 1) <= max_fds, you can get an idea of the numbers (see the small check below). From there, if you set the limit to 8192 (I didn't test this), you should be able to handle 2729 graphs (assuming no additional fds are opened by the code)
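A quick sanity check of that inequality (the assumption, from the analysis above, is that each graph leaks 3 fds and that the 3 standard streams also count toward the limit):

def max_graphs(max_fds):
    # 3 fds per graph, plus 3 for stdin / stdout / stderr
    return max_fds // 3 - 1

print(max_graphs(512))   # 169 -> the 170th graph crashes with the default limit
print(max_graphs(8192))  # 2729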
Side notes:
While investigating, I ran into or noticed several adjacent issues that I tried to fix:
Graphviz:
[GitLab]: graphviz/graphviz - [Issue #1481]: MSB4018 The NativeCodeAnalysis task failed unexpectedly. (merged on 200406)
PyGraphviz:
[GitHub]: pygraphviz/pygraphviz - AGraph Graphviz handle close mechanism (merged on 200720)
There's also an issue open for this behavior (probably the same author): [GitHub]: pygraphviz/pygraphviz - Pygraphviz crashes after drawing 170 graphs
I tried your code and it generated 200 graphs with no problem (I also tried with 2000).
My suggestion is to use these versions of the packages; I installed them in a conda environment on macOS with Python 3.7:
graphviz 2.40.1 hefbbd9a_2
pygraphviz 1.3 py37h1de35cc_1
I run conda 4.6.3 with Python 3.7.2 win32. In Python, when I import numpy, I see the RAM usage increase by 80 MB. Since I am using multiprocessing, I wonder if this is normal and if there is any way to avoid this RAM overhead? Please see below all the versions of the relevant packages (from conda list):
python...........3.7.2 h8c8aaf0_2
mkl_fft...........1.0.10 py37h14836fe_0
mkl_random..1.0.2 py37h343c172_0
numpy...........1.15.4 py37h19fb1c0_0
numpy-base..1.15.4 py37hc3f5095_0
thanks!
You can't avoid this cost, but it's likely not as bad as it seems. The numpy libraries (a copy of the C-only libopenblasp, plus all of numpy's Python extension modules) occupy over 60 MB on disk, and they're all going to be memory-mapped into your Python process on import; add all the Python modules and the dynamically allocated memory involved in loading and initializing them, and 80 MB of increased reported RAM usage is pretty normal.
That said:
The C libraries and Python extension modules are memory-mapped in, but that doesn't actually mean they occupy "real" RAM; if the code paths in a given page aren't exercised, the page will either never be loaded or will be dropped under memory pressure (not even written to the page file, since the OS can always reload it from the original DLL).
On UNIX-like systems, when you fork (which multiprocessing does by default everywhere but Windows), that memory is shared between parent and worker processes in copy-on-write mode. Since the code itself is generally not written to, the only cost is the page tables themselves (a tiny fraction of the memory they reference), and both parent and child will share that RAM.
Sadly, on Windows, fork isn't an option (unless you're running Ubuntu bash on Windows, in which case it's only barely Windows and effectively Linux), so you'll likely pay more of the memory cost in each process. Even there, though, libopenblasp, the C library backing large parts of numpy, will be remapped per process, but the OS should properly share that read-only memory across processes (and large parts, if not all, of the Python extension modules as well).
Basically, until this actually causes a problem (and it's unlikely to do so), don't worry about it.
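If you want to see what the per-worker cost actually is on your machine, a small sketch like the one below can help (not from the answer; it assumes psutil is installed): each worker imports numpy and reports its own RSS.

import multiprocessing as mp

def report_rss(_):
    import os
    import psutil
    import numpy  # noqa: F401 - the import whose cost we want to measure
    return psutil.Process(os.getpid()).memory_info().rss

if __name__ == "__main__":
    # On Windows, each worker is spawned and re-imports numpy itself;
    # on UNIX (fork), most of that memory is shared with the parent.
    with mp.Pool(4) as pool:
        for rss in pool.map(report_rss, range(4)):
            print("worker RSS: {:.1f} MiB".format(rss / 2 ** 20))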
[NumPy]: NumPy
is the fundamental package for scientific computing with Python.
It is a big package, designed to work with large datasets and optimized (primarily) for speed. If you look in its __init__.py (which gets executed when importing it (e.g.: import numpy)), you'll notice that it imports lots of items (packages / modules):
Those items themselves may import others
Some of them are extension modules (.pyds (.dlls) or .sos) which get loaded into the current process (their dependencies as well)
I've prepared a demo.
code.py:
#!/usr/bin/env python3

import sys
import os
import psutil
#import pprint


def main():
    display_text = "This {:s} screenshot was taken. Press <Enter> to continue ... "
    pid = os.getpid()
    print("Pid: {:d}\n".format(pid))
    p = psutil.Process(pid=pid)
    mod_names0 = set(k for k in sys.modules)
    mi0 = p.memory_info()
    input(display_text.format("first"))
    import numpy
    input(display_text.format("second"))
    mi1 = p.memory_info()
    for idx, mi in enumerate([mi0, mi1], start=1):
        print("\nMemory info ({:d}): {:}".format(idx, mi))
    print("\nExtra modules imported by `{:s}` :".format(numpy.__name__))
    print(sorted(set(k for k in sys.modules) - mod_names0))
    #pprint.pprint({k: v for k, v in sys.modules.items() if k not in mod_names0})
    print("\nDone.")


if __name__ == "__main__":
    print("Python {:s} on {:s}\n".format(sys.version, sys.platform))
    main()
Output:
[cfati#CFATI-5510-0:e:\Work\Dev\StackOverflow\q054675983]> "e:\Work\Dev\VEnvs\py_064_03.06.08_test0\Scripts\python.exe" code.py
Python 3.6.8 (tags/v3.6.8:3c6b436a57, Dec 24 2018, 00:16:47) [MSC v.1916 64 bit (AMD64)] on win32
Pid: 27160
This first screenshot was taken. Press <Enter> to continue ...
This second screenshot was taken. Press <Enter> to continue ...
Memory info (1): pmem(rss=15491072, vms=8458240, num_page_faults=4149, peak_wset=15495168, wset=15491072, peak_paged_pool=181160, paged_pool=180984, peak_nonpaged_pool=13720, nonpaged_pool=13576, pagefile=8458240, peak_pagefile=8458240, private=8458240)
Memory info (2): pmem(rss=27156480, vms=253882368, num_page_faults=7283, peak_wset=27205632, wset=27156480, peak_paged_pool=272160, paged_pool=272160, peak_nonpaged_pool=21640, nonpaged_pool=21056, pagefile=253882368, peak_pagefile=253972480, private=253882368)
Extra modules imported by `numpy` :
['_ast', '_bisect', '_blake2', '_compat_pickle', '_ctypes', '_decimal', '_hashlib', '_pickle', '_random', '_sha3', '_string', '_struct', 'argparse', 'ast', 'atexit', 'bisect', 'copy', 'ctypes', 'ctypes._endian', 'cython_runtime', 'decimal', 'difflib', 'gc', 'gettext', 'hashlib', 'logging', 'mtrand', 'numbers', 'numpy', 'numpy.__config__', 'numpy._distributor_init', 'numpy._globals', 'numpy._import_tools', 'numpy.add_newdocs', 'numpy.compat', 'numpy.compat._inspect', 'numpy.compat.py3k', 'numpy.core', 'numpy.core._internal', 'numpy.core._methods', 'numpy.core._multiarray_tests', 'numpy.core.arrayprint', 'numpy.core.defchararray', 'numpy.core.einsumfunc', 'numpy.core.fromnumeric', 'numpy.core.function_base', 'numpy.core.getlimits', 'numpy.core.info', 'numpy.core.machar', 'numpy.core.memmap', 'numpy.core.multiarray', 'numpy.core.numeric', 'numpy.core.numerictypes', 'numpy.core.records', 'numpy.core.shape_base', 'numpy.core.umath', 'numpy.ctypeslib', 'numpy.fft', 'numpy.fft.fftpack', 'numpy.fft.fftpack_lite', 'numpy.fft.helper', 'numpy.fft.info', 'numpy.lib', 'numpy.lib._datasource', 'numpy.lib._iotools', 'numpy.lib._version', 'numpy.lib.arraypad', 'numpy.lib.arraysetops', 'numpy.lib.arrayterator', 'numpy.lib.financial', 'numpy.lib.format', 'numpy.lib.function_base', 'numpy.lib.histograms', 'numpy.lib.index_tricks', 'numpy.lib.info', 'numpy.lib.mixins', 'numpy.lib.nanfunctions', 'numpy.lib.npyio', 'numpy.lib.polynomial', 'numpy.lib.scimath', 'numpy.lib.shape_base', 'numpy.lib.stride_tricks', 'numpy.lib.twodim_base', 'numpy.lib.type_check', 'numpy.lib.ufunclike', 'numpy.lib.utils', 'numpy.linalg', 'numpy.linalg._umath_linalg', 'numpy.linalg.info', 'numpy.linalg.lapack_lite', 'numpy.linalg.linalg', 'numpy.ma', 'numpy.ma.core', 'numpy.ma.extras', 'numpy.matrixlib', 'numpy.matrixlib.defmatrix', 'numpy.polynomial', 'numpy.polynomial._polybase', 'numpy.polynomial.chebyshev', 'numpy.polynomial.hermite', 'numpy.polynomial.hermite_e', 'numpy.polynomial.laguerre', 'numpy.polynomial.legendre', 'numpy.polynomial.polynomial', 'numpy.polynomial.polyutils', 'numpy.random', 'numpy.random.info', 'numpy.random.mtrand', 'numpy.testing', 'numpy.testing._private', 'numpy.testing._private.decorators', 'numpy.testing._private.nosetester', 'numpy.testing._private.pytesttester', 'numpy.testing._private.utils', 'numpy.version', 'pathlib', 'pickle', 'pprint', 'random', 'string', 'struct', 'tempfile', 'textwrap', 'unittest', 'unittest.case', 'unittest.loader', 'unittest.main', 'unittest.result', 'unittest.runner', 'unittest.signals', 'unittest.suite', 'unittest.util', 'urllib', 'urllib.parse']
Done.
And the (before and after import) screenshots ([MS.Docs]: Process Explorer):
As a personal remark, I think that ~80 MiB (or whatever the exact amount is) is more than decent for the current "era", which is characterized by ridiculously high amounts of hardware resources, especially in the memory area. Besides, it would probably be insignificant compared to the amount required by the arrays themselves. If that's not the case, you should probably consider moving away from numpy.
There could be a way to reduce the memory footprint, by selectively importing only the modules containing the features that you need (my personal advice is against it), thus going around __init__.py (a rough sketch follows this list):
You'd have to be an expert in numpy's internals
Modules must be imported "manually" (by file name), using [Python 3]: importlib - The implementation of import (or alternatives)
Their dependencies will be imported / loaded as well (and because of this, I don't know how much free memory you'd gain)
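A minimal sketch of the mechanism (the target module and file path are hypothetical placeholders; as noted above, whatever the module itself depends on still gets pulled in, so the actual savings are unclear):

import importlib.util

# Hypothetical: load one extension module straight from its file,
# bypassing numpy's package __init__.py. Adjust the path to your install.
mod_path = r"C:\path\to\site-packages\numpy\core\multiarray.pyd"
spec = importlib.util.spec_from_file_location("multiarray_standalone", mod_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)  # may still fail or load more, depending on the module's own imports
print(module)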