How to avoid high CPU usage with pysnmp - python

I am using pysnmp and have encountered high CPU usage. I know netsnmp is written in C and pysnmp in Python, so I would expect the CPU usage times to be about 20-100% higher because of that. Instead I am seeing 20 times higher CPU usage times.
Am I using pysnmp correctly or could I do something to make it use less resources?
Test case 1 - PySNMP:
from pysnmp.entity.rfc3413.oneliner import cmdgen
import config
import yappi

yappi.start()
cmdGen = cmdgen.CommandGenerator()
errorIndication, errorStatus, errorIndex, varBindTable = cmdGen.nextCmd(
    cmdgen.CommunityData(config.COMMUNITY),
    cmdgen.UdpTransportTarget((config.HOST, config.PORT)),
    config.OID,
    lexicographicMode=False,
    ignoreNonIncreasingOid=True,
    lookupValue=False, lookupNames=False
)
for varBindTableRow in varBindTable:
    for name, val in varBindTableRow:
        print('%s' % (val,))
yappi.get_func_stats().print_all()
Test case 2 - NetSNMP:
import netsnmp
import config
import yappi

yappi.start()
oid = netsnmp.VarList(netsnmp.Varbind('.' + config.OID))
res = netsnmp.snmpwalk(oid, Version=2, DestHost=config.HOST, Community=config.COMMUNITY)
print(res)
yappi.get_func_stats().print_all()
If someone wants to test this for themselves, both test cases need a small settings file, config.py:
HOST = '192.168.1.111'
COMMUNITY = 'public'
PORT = 161
OID = '1.3.6.1.2.1.2.2.1.8'
I have compared the returned values and they are the same - so both examples function correctly. The difference is in timings:
PySNMP:
Clock type: cpu
Ordered by: totaltime, desc
name #n tsub ttot tavg
..dgen.py:408 CommandGenerator.nextCmd 1 0.000108 1.890072 1.890072
..:31 AsynsockDispatcher.runDispatcher 1 0.005068 1.718650 1.718650
..r/lib/python2.7/asyncore.py:125 poll 144 0.010087 1.707852 0.011860
/usr/lib/python2.7/asyncore.py:81 read 72 0.001191 1.665637 0.023134
..UdpSocketTransport.handle_read_event 72 0.001301 1.664446 0.023117
..py:75 UdpSocketTransport.handle_read 72 0.001888 1.663145 0.023099
..base.py:32 AsynsockDispatcher._cbFun 72 0.001766 1.658938 0.023041
..:55 SnmpEngine.__receiveMessageCbFun 72 0.002194 1.656747 0.023010
..4 MsgAndPduDispatcher.receiveMessage 72 0.008587 1.654553 0.022980
..eProcessingModel.prepareDataElements 72 0.014170 0.831581 0.011550
../ber/decoder.py:585 Decoder.__call__ 1224/216 0.111002 0.801783 0.000655
...py:312 SequenceDecoder.valueDecoder 288/144 0.034554 0.757069 0.002629
..tCommandGenerator.processResponsePdu 72 0.008425 0.730610 0.010147
..NextCommandGenerator._handleResponse 72 0.008692 0.712964 0.009902
...
NetSNMP:
Clock type: cpu
Ordered by: totaltime, desc
name #n tsub ttot tavg
..kages/netsnmp/client.py:227 snmpwalk 1 0.000076 0.103274 0.103274
..s/netsnmp/client.py:173 Session.walk 1 0.000024 0.077640 0.077640
..etsnmp/client.py:48 Varbind.__init__ 72 0.008860 0.035225 0.000489
..tsnmp/client.py:111 Session.__init__ 1 0.000055 0.025551 0.025551
...
So, netsnmp uses 0.103 s of CPU time and pysnmp uses 1.890 s of CPU time for the same operation. I find the results surprising... I have also tested the asynchronous mode, but the results were even a bit worse.
Am I doing something wrong (with pysnmp)?
UPDATE:
As per Ilya's suggestion, I have tried using BULK instead of WALK. BULK is indeed much faster overall, but pysnmp still uses roughly 20x the CPU time of netsnmp:
..dgen.py:496 CommandGenerator.bulkCmd 1 0.000105 0.726187 0.726187
Netsnmp:
..es/netsnmp/client.py:216 snmpgetbulk 1 0.000109 0.044421 0.044421
So the question still stands - can I make pySNMP less CPU intensive? Am I using it incorrectly?

Try using GETBULK instead of GETNEXT. With your code and a max-repetitions setting of 25, it gives a 5x performance improvement in my synthetic test.
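For reference, a rough sketch of what that looks like with the same oneliner API the question uses (nonRepeaters=0 and maxRepetitions=25; argument order as in the pysnmp 4.x oneliner bulkCmd, which also appears in the question's own profiling output):

from pysnmp.entity.rfc3413.oneliner import cmdgen
import config

cmdGen = cmdgen.CommandGenerator()
# bulkCmd takes nonRepeaters and maxRepetitions before the OIDs
errorIndication, errorStatus, errorIndex, varBindTable = cmdGen.bulkCmd(
    cmdgen.CommunityData(config.COMMUNITY),
    cmdgen.UdpTransportTarget((config.HOST, config.PORT)),
    0, 25,  # nonRepeaters, maxRepetitions
    config.OID,
    lexicographicMode=False
)

Each GETBULK request fetches up to 25 varbinds at once, so the same walk needs far fewer request/response round trips, and therefore far fewer BER encode/decode passes.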

Related

Memory leak in python-igraph with induced_subgraph function?

I'm working with igraph in Python and encountered a problem when calling the induced_subgraph function a million times: memory consumption keeps growing.
import gc
import igraph
import resource

g = igraph.Graph(n=4, edges=[[0, 1], [0, 2], [0, 3]])
mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
for i in range(1, 1000001):
    igraph_list_v = [0, 1, 2, 3]
    s = g.induced_subgraph(igraph_list_v)
    del s
    if i % 100000 == 0:
        print(gc.collect())
        print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss - mem)
        mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
Output:
0
4752
0
4752
0
4752
0
4752
0
4488
0
4752
0
4732
0
4752
0
4752
0
4752
The zeros are the return values of gc.collect(). Based on the output, the induced_subgraph function consumes 4752 kilobytes every 100000 calls.
The printed memory usage increases with each print statement. This is a dummy case, but I call induced_subgraph many millions of times in my real code and it breaks my script with a memory error. What is a possible workaround, and what is actually the reason for this behavior? Also keep in mind that I am at a beginner level, so please keep the answer as simple as possible. Thank you for your help!

How to get CPU and RAM usage from windows machine using python? [duplicate]

How can I get the current system status (current CPU, RAM, free disk space, etc.) in Python? Ideally, it would work for both Unix and Windows platforms.
There seem to be a few possible ways of extracting that from my search:
Using a library such as PSI (that currently seems not actively developed and not supported on multiple platforms) or something like pystatgrab (again no activity since 2007 it seems and no support for Windows).
Using platform-specific code such as os.popen("ps") or similar for *nix systems, and MEMORYSTATUS in ctypes.windll.kernel32 (see this recipe on ActiveState) for the Windows platform. One could put a Python class together with all those code snippets.
It's not that those methods are bad but is there already a well-supported, multi-platform way of doing the same thing?
The psutil library gives you information about CPU, RAM, etc., on a variety of platforms:
psutil is a module providing an interface for retrieving information on running processes and system utilization (CPU, memory) in a portable way by using Python, implementing many functionalities offered by tools like ps, top and Windows task manager.
It currently supports Linux, Windows, OSX, Sun Solaris, FreeBSD, OpenBSD and NetBSD, both 32-bit and 64-bit architectures, with Python versions from 2.6 to 3.5 (users of Python 2.4 and 2.5 may use 2.1.3 version).
Some examples:
#!/usr/bin/env python
import psutil

# gives a single float value
psutil.cpu_percent()

# gives an object with many fields
psutil.virtual_memory()

# you can convert that object to a dictionary
dict(psutil.virtual_memory()._asdict())

# you can have the percentage of used RAM
psutil.virtual_memory().percent  # 79.2

# you can calculate percentage of available memory
psutil.virtual_memory().available * 100 / psutil.virtual_memory().total  # 20.8
Here's the documentation, which provides more of these and other concepts:
https://psutil.readthedocs.io/en/latest/
Use the psutil library. On Ubuntu 18.04, pip installed 5.5.0 (latest version) as of 1-30-2019. Older versions may behave somewhat differently.
You can check your version of psutil by doing this in Python:
from __future__ import print_function # for Python2
import psutil
print(psutil.__version__)
To get some memory and CPU stats:
from __future__ import print_function
import psutil
print(psutil.cpu_percent())
print(psutil.virtual_memory()) # physical memory usage
print('memory % used:', psutil.virtual_memory()[2])
The virtual_memory result (a named tuple) includes the percent of memory used system-wide. This seemed to be overestimated by a few percent for me on Ubuntu 18.04.
You can also get the memory used by the current Python instance:
import os
import psutil
pid = os.getpid()
python_process = psutil.Process(pid)
memoryUse = python_process.memory_info()[0]/2.**30 # memory use in GB...I think
print('memory use:', memoryUse)
which gives the current memory use of your Python script.
There are some more in-depth examples on the pypi page for psutil.
Only for Linux:
One-liner for the RAM usage with only stdlib dependency:
import os
tot_m, used_m, free_m = map(int, os.popen('free -t -m').readlines()[-1].split()[1:])
One can get real time CPU and RAM monitoring by combining tqdm and psutil. It may be handy when running heavy computations / processing.
It also works in Jupyter without any code changes:
from tqdm import tqdm
from time import sleep
import psutil

with tqdm(total=100, desc='cpu%', position=1) as cpubar, tqdm(total=100, desc='ram%', position=0) as rambar:
    while True:
        rambar.n = psutil.virtual_memory().percent
        cpubar.n = psutil.cpu_percent()
        rambar.refresh()
        cpubar.refresh()
        sleep(0.5)
It's convenient to put those progress bars in a separate process using the multiprocessing library.
This code snippet is also available as a gist.
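A minimal sketch of that (assuming the loop above is wrapped in a function; monitor here is just an illustrative name):

from multiprocessing import Process
from time import sleep
import psutil
from tqdm import tqdm

def monitor():
    # same tqdm/psutil loop as above
    with tqdm(total=100, desc='cpu%', position=1) as cpubar, tqdm(total=100, desc='ram%', position=0) as rambar:
        while True:
            rambar.n = psutil.virtual_memory().percent
            cpubar.n = psutil.cpu_percent()
            rambar.refresh()
            cpubar.refresh()
            sleep(0.5)

if __name__ == '__main__':
    p = Process(target=monitor, daemon=True)  # daemon: dies together with the main process
    p.start()
    # ... run the heavy computation here ...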
The code below worked for me without external libraries. I tested it on Python 2.7.9.
CPU Usage
import os
CPU_Pct=str(round(float(os.popen('''grep 'cpu ' /proc/stat | awk '{usage=($2+$4)*100/($2+$4+$5)} END {print usage }' ''').readline()),2))
print("CPU Usage = " + CPU_Pct) # print results
And Ram Usage, Total, Used and Free
import os
mem=str(os.popen('free -t -m').readlines())
"""
Get a whole line of memory output, it will be something like below
[' total used free shared buffers cached\n',
'Mem: 925 591 334 14 30 355\n',
'-/+ buffers/cache: 205 719\n',
'Swap: 99 0 99\n',
'Total: 1025 591 434\n']
So, we need total memory, used memory and free memory.
We should find the index of the capital T, which is unique in this string.
"""
T_ind=mem.index('T')
"""
Then, we can recreate the string with this information. After T we have
"Total: ", which has 14 characters, so we can start from the index of T + 14;
the last 4 characters are also not necessary.
We can create a new sub-string using this information.
"""
mem_G=mem[T_ind+14:-4]
"""
The result will be like
1025 603 422
We need to find the index of the first space, and we can take our substring
from 0 to this index; this will give us the string of total memory.
"""
S1_ind=mem_G.index(' ')
mem_T=mem_G[0:S1_ind]
"""
Similarly we will create a new sub-string, which will start at the second value.
The resulting string will be like
603 422
Again, we should find the index of the first space and then
take the used memory and free memory.
"""
mem_G1=mem_G[S1_ind+8:]
S2_ind=mem_G1.index(' ')
mem_U=mem_G1[0:S2_ind]
mem_F=mem_G1[S2_ind+8:]
print 'Summary = ' + mem_G
print 'Total Memory = ' + mem_T +' MB'
print 'Used Memory = ' + mem_U +' MB'
print 'Free Memory = ' + mem_F +' MB'
To get a line-by-line memory and time analysis of your program, I suggest using memory_profiler and line_profiler.
Installation:
# Time profiler
$ pip install line_profiler
# Memory profiler
$ pip install memory_profiler
# Install the dependency for a faster analysis
$ pip install psutil
The common part is, you specify which function you want to analyse by using the respective decorators.
Example: I have several functions in my Python file main.py that I want to analyse. One of them is linearRegressionfit(). I need to use the decorator @profile, which helps me profile the code with respect to both time and memory.
Make the following changes to the function definition
@profile
def linearRegressionfit(Xt, Yt, Xts, Yts):
    lr = LinearRegression()
    model = lr.fit(Xt, Yt)
    predict = lr.predict(Xts)
    # More Code
For Time Profiling,
Run:
$ kernprof -l -v main.py
Output
Total time: 0.181071 s
File: main.py
Function: linearRegressionfit at line 35
Line # Hits Time Per Hit % Time Line Contents
==============================================================
35 @profile
36 def linearRegressionfit(Xt,Yt,Xts,Yts):
37 1 52.0 52.0 0.1 lr=LinearRegression()
38 1 28942.0 28942.0 75.2 model=lr.fit(Xt,Yt)
39 1 1347.0 1347.0 3.5 predict=lr.predict(Xts)
40
41 1 4924.0 4924.0 12.8 print("train Accuracy",lr.score(Xt,Yt))
42 1 3242.0 3242.0 8.4 print("test Accuracy",lr.score(Xts,Yts))
For Memory Profiling,
Run:
$ python -m memory_profiler main.py
Output
Filename: main.py
Line # Mem usage Increment Line Contents
================================================
35 125.992 MiB 125.992 MiB @profile
36 def linearRegressionfit(Xt,Yt,Xts,Yts):
37 125.992 MiB 0.000 MiB lr=LinearRegression()
38 130.547 MiB 4.555 MiB model=lr.fit(Xt,Yt)
39 130.547 MiB 0.000 MiB predict=lr.predict(Xts)
40
41 130.547 MiB 0.000 MiB print("train Accuracy",lr.score(Xt,Yt))
42 130.547 MiB 0.000 MiB print("test Accuracy",lr.score(Xts,Yts))
The memory profiler results can also be plotted with matplotlib:
$ mprof run main.py
$ mprof plot
Note: Tested on
line_profiler version == 3.0.2
memory_profiler version == 0.57.0
psutil version == 5.7.0
EDIT: The results from the profilers can be parsed using the TAMPPA package. Using it, we can get the desired line-by-line plots.
We chose this usual information source because we wanted to see instantaneous fluctuations in free memory, and querying the meminfo data source was helpful for that. It also gave us a few more related parameters that came pre-parsed.
Code
import os
linux_filepath = "/proc/meminfo"
meminfo = dict(
    (i.split()[0].rstrip(":"), int(i.split()[1]))
    for i in open(linux_filepath).readlines()
)
meminfo["memory_total_gb"] = meminfo["MemTotal"] / (2 ** 20)
meminfo["memory_free_gb"] = meminfo["MemFree"] / (2 ** 20)
meminfo["memory_available_gb"] = meminfo["MemAvailable"] / (2 ** 20)
Output for reference (we stripped all newlines for further analysis)
MemTotal: 1014500 kB MemFree: 562680 kB MemAvailable: 646364 kB
Buffers: 15144 kB Cached: 210720 kB SwapCached: 0 kB Active: 261476 kB
Inactive: 128888 kB Active(anon): 167092 kB Inactive(anon): 20888 kB
Active(file): 94384 kB Inactive(file): 108000 kB Unevictable: 3652 kB
Mlocked: 3652 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 0 kB Writeback:
0 kB AnonPages: 168160 kB Mapped: 81352 kB Shmem: 21060 kB Slab: 34492
kB SReclaimable: 18044 kB SUnreclaim: 16448 kB KernelStack: 2672 kB
PageTables: 8180 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB
CommitLimit: 507248 kB Committed_AS: 1038756 kB VmallocTotal:
34359738367 kB VmallocUsed: 0 kB VmallocChunk: 0 kB HardwareCorrupted:
0 kB AnonHugePages: 88064 kB CmaTotal: 0 kB CmaFree: 0 kB
HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp:
0 Hugepagesize: 2048 kB DirectMap4k: 43008 kB DirectMap2M: 1005568 kB
Here's something I put together a while ago; it's Windows-only, but may help you get part of what you need done.
Derived from:
"for sys available mem"
http://msdn2.microsoft.com/en-us/library/aa455130.aspx
"individual process information and python script examples"
http://www.microsoft.com/technet/scriptcenter/scripts/default.mspx?mfr=true
NOTE: the WMI interface/process is also available for performing similar tasks
I'm not using it here because the current method covers my needs, but if someday it's needed to extend or improve this, then you may want to investigate the WMI tools available.
WMI for python:
http://tgolden.sc.sabren.com/python/wmi.html
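As an aside, a minimal sketch of that WMI route using Tim Golden's wmi package (pip install wmi); shown only for comparison, the module below does not use it:

import wmi

c = wmi.WMI()
for os_info in c.Win32_OperatingSystem():
    print(os_info.FreePhysicalMemory)  # free physical memory, in KB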
The code:
'''
Monitor window processes
derived from:
>for sys available mem
http://msdn2.microsoft.com/en-us/library/aa455130.aspx
> individual process information and python script examples
http://www.microsoft.com/technet/scriptcenter/scripts/default.mspx?mfr=true
NOTE: the WMI interface/process is also available for performing similar tasks
I'm not using it here because the current method covers my needs, but if someday it's needed
to extend or improve this module, then may want to investigate the WMI tools available.
WMI for python:
http://tgolden.sc.sabren.com/python/wmi.html
'''
__revision__ = 3

import win32com.client
from ctypes import *
from ctypes.wintypes import *
import pythoncom
import pywintypes
import datetime

class MEMORYSTATUS(Structure):
    _fields_ = [
        ('dwLength', DWORD),
        ('dwMemoryLoad', DWORD),
        ('dwTotalPhys', DWORD),
        ('dwAvailPhys', DWORD),
        ('dwTotalPageFile', DWORD),
        ('dwAvailPageFile', DWORD),
        ('dwTotalVirtual', DWORD),
        ('dwAvailVirtual', DWORD),
    ]

def winmem():
    x = MEMORYSTATUS()  # create the structure
    windll.kernel32.GlobalMemoryStatus(byref(x))  # from ctypes.wintypes
    return x

class process_stats:
    '''process_stats is able to provide counters of (all?) the items available in perfmon.
    Refer to the self.supported_types keys for the currently supported 'Performance Objects'
    To add logging support for other data you can derive the necessary data from perfmon:
    ---------
    perfmon can be run from windows 'run' menu by entering 'perfmon' and enter.
    Clicking on the '+' will open the 'add counters' menu,
    From the 'Add Counters' dialog, the 'Performance object' is the self.support_types key.
    --> Where spaces are removed and symbols are entered as text (Ex. # == Number, % == Percent)
    For the items you wish to log add the proper attribute name in the list in the self.supported_types dictionary,
    keyed by the 'Performance Object' name as mentioned above.
    ---------
    NOTE: The 'NETFramework_NETCLRMemory' key does not seem to log dotnet 2.0 properly.
    Initially the python implementation was derived from:
    http://www.microsoft.com/technet/scriptcenter/scripts/default.mspx?mfr=true
    '''
    def __init__(self, process_name_list=[], perf_object_list=[], filter_list=[]):
        '''process_name_list == the list of all processes to log (if empty log all)
        perf_object_list == list of process counters to log
        filter_list == list of text to filter
        print_results == boolean, output to stdout
        '''
        pythoncom.CoInitialize()  # Needed when run by the same process in a thread
        self.process_name_list = process_name_list
        self.perf_object_list = perf_object_list
        self.filter_list = filter_list
        self.win32_perf_base = 'Win32_PerfFormattedData_'
        # Define new datatypes here!
        self.supported_types = {
            'NETFramework_NETCLRMemory': [
                'Name',
                'NumberTotalCommittedBytes',
                'NumberTotalReservedBytes',
                'NumberInducedGC',
                'NumberGen0Collections',
                'NumberGen1Collections',
                'NumberGen2Collections',
                'PromotedMemoryFromGen0',
                'PromotedMemoryFromGen1',
                'PercentTimeInGC',
                'LargeObjectHeapSize'
            ],
            'PerfProc_Process': [
                'Name',
                'PrivateBytes',
                'ElapsedTime',
                'IDProcess',  # pid
                'Caption',
                'CreatingProcessID',
                'Description',
                'IODataBytesPersec',
                'IODataOperationsPersec',
                'IOOtherBytesPersec',
                'IOOtherOperationsPersec',
                'IOReadBytesPersec',
                'IOReadOperationsPersec',
                'IOWriteBytesPersec',
                'IOWriteOperationsPersec'
            ]
        }

    def get_pid_stats(self, pid):
        this_proc_dict = {}
        pythoncom.CoInitialize()  # Needed when run by the same process in a thread
        perf_object_list = self.perf_object_list
        if not perf_object_list:
            perf_object_list = self.supported_types.keys()
        for counter_type in perf_object_list:
            strComputer = "."
            objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator")
            objSWbemServices = objWMIService.ConnectServer(strComputer, r"root\cimv2")
            query_str = '''Select * from %s%s''' % (self.win32_perf_base, counter_type)
            colItems = objSWbemServices.ExecQuery(query_str)  # e.g. "Select * from Win32_PerfFormattedData_PerfProc_Process"; changed from Win32_Thread
            if len(colItems) > 0:
                for objItem in colItems:
                    if hasattr(objItem, 'IDProcess') and pid == objItem.IDProcess:
                        for attribute in self.supported_types[counter_type]:
                            # direct attribute access instead of eval
                            this_proc_dict[attribute] = getattr(objItem, attribute)
                        this_proc_dict['TimeStamp'] = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.') + str(datetime.datetime.now().microsecond)[:3]
                        break
        return this_proc_dict

    def get_stats(self):
        '''
        Show process stats for all processes in given list, if none given return all processes
        If filter list is defined return only the items that match or contained in the list
        Returns a list of result dictionaries
        '''
        pythoncom.CoInitialize()  # Needed when run by the same process in a thread
        proc_results_list = []
        perf_object_list = self.perf_object_list
        if not perf_object_list:
            perf_object_list = self.supported_types.keys()
        for counter_type in perf_object_list:
            strComputer = "."
            objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator")
            objSWbemServices = objWMIService.ConnectServer(strComputer, r"root\cimv2")
            query_str = '''Select * from %s%s''' % (self.win32_perf_base, counter_type)
            colItems = objSWbemServices.ExecQuery(query_str)  # e.g. "Select * from Win32_PerfFormattedData_PerfProc_Process"; changed from Win32_Thread
            try:
                if len(colItems) > 0:
                    for objItem in colItems:
                        found_flag = False
                        this_proc_dict = {}
                        if not self.process_name_list:
                            found_flag = True
                        else:
                            # Check if process name is in the process name list, allow print if it is
                            for proc_name in self.process_name_list:
                                obj_name = objItem.Name
                                if proc_name.lower() in obj_name.lower():  # will log if contains name
                                    found_flag = True
                                    break
                        if found_flag:
                            for attribute in self.supported_types[counter_type]:
                                # direct attribute access instead of eval
                                this_proc_dict[attribute] = getattr(objItem, attribute)
                            this_proc_dict['TimeStamp'] = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.') + str(datetime.datetime.now().microsecond)[:3]
                            proc_results_list.append(this_proc_dict)
            except pywintypes.com_error, err_msg:
                # Ignore and continue (proc_mem_logger calls this function once per second)
                continue
        return proc_results_list

def get_sys_stats():
    ''' Returns a dictionary of the system stats'''
    pythoncom.CoInitialize()  # Needed when run by the same process in a thread
    x = winmem()
    sys_dict = {
        'dwAvailPhys': x.dwAvailPhys,
        'dwAvailVirtual': x.dwAvailVirtual
    }
    return sys_dict

if __name__ == '__main__':
    # This area used for testing only
    sys_dict = get_sys_stats()
    stats_processor = process_stats(process_name_list=['process2watch'], perf_object_list=[], filter_list=[])
    proc_results = stats_processor.get_stats()
    for result_dict in proc_results:
        print result_dict
    import os
    this_pid = os.getpid()
    this_proc_results = stats_processor.get_pid_stats(this_pid)
    print 'this proc results:'
    print this_proc_results
I feel like these answers were written for Python 2, and in any case nobody's made mention of the standard resource package that's available for Python 3. It provides commands for obtaining the resource limits of a given process (the calling Python process by default). This isn't the same as getting the current usage of resources by the system as a whole, but it could solve some of the same problems like e.g. "I want to make sure I only use X much RAM with this script."
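A minimal sketch of that idea (Unix only; the 1 GiB cap is an arbitrary example value):

import resource

# Read the current soft/hard limits on the address space
soft, hard = resource.getrlimit(resource.RLIMIT_AS)

# Cap the soft limit so a runaway allocation raises MemoryError
# instead of exhausting the machine
resource.setrlimit(resource.RLIMIT_AS, (1024 ** 3, hard))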
This aggregates all the goodies:
psutil + os to get Unix & Windows compatibility:
That allows us to get:
CPU
memory
disk
code:
import os
import psutil # need: pip install psutil
In [32]: psutil.virtual_memory()
Out[32]: svmem(total=6247907328, available=2502328320, percent=59.9, used=3327135744, free=167067648, active=3671199744, inactive=1662668800, buffers=844783616, cached=1908920320, shared=123912192, slab=613048320)
In [33]: psutil.virtual_memory().percent
Out[33]: 60.0
In [34]: psutil.cpu_percent()
Out[34]: 5.5
In [35]: os.sep
Out[35]: '/'
In [36]: psutil.disk_usage(os.sep)
Out[36]: sdiskusage(total=50190790656, used=41343860736, free=6467502080, percent=86.5)
In [37]: psutil.disk_usage(os.sep).percent
Out[37]: 86.5
I took the feedback from the first response and made small changes:
#!/usr/bin/env python
# Execute this command on a Windows machine to install psutil: python -m pip install psutil
import psutil
print (' ')
print ('----------------------CPU Information summary----------------------')
print (' ')
# gives a single float value
vcc=psutil.cpu_count()
print ('Total number of CPUs :',vcc)
vcpu=psutil.cpu_percent()
print ('Total CPUs utilized percentage :',vcpu,'%')
print (' ')
print ('----------------------RAM Information summary----------------------')
print (' ')
# you can convert that object to a dictionary
#print(dict(psutil.virtual_memory()._asdict()))
# gives an object with many fields
vvm=psutil.virtual_memory()
x=dict(psutil.virtual_memory()._asdict())
def forloop():
    for i in x:
        print(i, "--", x[i] / 1024 / 1024 / 1024)  # output will be printed in GBs
forloop()
print (' ')
print ('----------------------RAM Utilization summary----------------------')
print (' ')
# you can have the percentage of used RAM
print('Percentage of used RAM :',psutil.virtual_memory().percent,'%')
#79.2
# you can calculate percentage of available memory
print('Percentage of available RAM :',psutil.virtual_memory().available * 100 / psutil.virtual_memory().total,'%')
#20.8
"... current system status (current CPU, RAM, free disk space, etc.)" And "*nix and Windows platforms" can be a difficult combination to achieve.
The operating systems are fundamentally different in the way they manage these resources. Indeed, they differ in core concepts like defining what counts as system and what counts as application time.
"Free disk space"? What counts as "disk space?" All partitions of all devices? What about foreign partitions in a multi-boot environment?
I don't think there's a clear enough consensus between Windows and *nix that makes this possible. Indeed, there may not even be any consensus between the various operating systems called Windows. Is there a single Windows API that works for both XP and Vista?
This script for CPU usage:
import os

def get_cpu_load():
    """ Returns a list of CPU loads """
    result = []
    cmd = "WMIC CPU GET LoadPercentage"
    response = os.popen(cmd + ' 2>&1', 'r').read().strip().split("\r\n")
    for load in response[1:]:
        result.append(int(load))
    return result

if __name__ == '__main__':
    print(get_cpu_load())
For CPU details, use the psutil library:
https://psutil.readthedocs.io/en/latest/#cpu
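For example, per-CPU utilization sampled over a 1-second interval (these calls are documented at the link above):

import psutil

print(psutil.cpu_percent(interval=1, percpu=True))  # per-CPU utilization, e.g. [3.0, 7.1, ...]
print(psutil.cpu_count())                           # logical CPUs
print(psutil.cpu_count(logical=False))              # physical cores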
For RAM frequency (in MHz), use the built-in Linux tool dmidecode and manipulate the output a bit ;). This command needs root permission, so supply your password too. Just copy the following command, replacing mypass with your password:
import os
os.system("echo mypass | sudo -S dmidecode -t memory | grep 'Clock Speed' | cut -d ':' -f2")
------------------- Output ---------------------------
1600 MT/s
Unknown
1600 MT/s
Unknown 0
More specifically:
[i for i in os.popen("echo mypass | sudo -S dmidecode -t memory | grep 'Clock Speed' | cut -d ':' -f2").read().split(' ') if i.isdigit()]
-------------------------- output -------------------------
['1600', '1600']
You can read /proc/meminfo to get the used memory:
file1 = open('/proc/meminfo', 'r')
for line in file1:
    if 'MemTotal' in line:
        x = line.split()
        memTotal = int(x[1])
    if 'Buffers' in line:
        x = line.split()
        buffers = int(x[1])
    if 'Cached' in line and 'SwapCached' not in line:
        x = line.split()
        cached = int(x[1])
    if 'MemFree' in line:
        x = line.split()
        memFree = int(x[1])
file1.close()

percentage_used = int((memTotal - (buffers + cached + memFree)) / memTotal * 100)
print(percentage_used)
Based on the CPU usage code by @Hrabal, this is what I use:
from subprocess import Popen, PIPE

def get_cpu_usage():
    ''' Get CPU usage on Linux by reading /proc/stat '''
    sub = Popen(('grep', 'cpu', '/proc/stat'), stdout=PIPE, stderr=PIPE)
    # first line is the aggregate 'cpu' line; fields 1-4 are user, nice, system, idle
    # (decode() added for Python 3, where communicate() returns bytes)
    top_vals = [int(val) for val in sub.communicate()[0].decode().split('\n')[0].split()[1:5]]
    return (top_vals[0] + top_vals[2]) * 100. / (top_vals[0] + top_vals[2] + top_vals[3])
You can use psutil, or ps_mem with subprocess.
Example code:
import subprocess

cmd = subprocess.Popen(['sudo', './ps_mem'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, error = cmd.communicate()
memory = out.splitlines()
Reference
https://github.com/Leo-g/python-flask-cmd
You can always use the recently released SystemScripter library, installed with pip install SystemScripter. It uses other libraries like psutil to build a full library of system information that spans from CPU to disk information.
For current CPU usage use the function:
SystemScripter.CPU.CpuPerCurrentUtil(SystemScripter.CPU()) #class init as self param if not work
This gets the usage percentage or use:
SystemScripter.CPU.CpuCurrentUtil(SystemScripter.CPU())
https://pypi.org/project/SystemScripter/#description
When run from crontab, this won't print the pid.
Setup: put the line */1 * * * * sh dog.sh in crontab -e.
import os
import re

CUT_OFF = 90

def get_cpu_load():
    cmd = "ps -Ao user,uid,comm,pid,pcpu --sort=-pcpu | head -n 2 | tail -1"
    response = os.popen(cmd, 'r').read()
    arr = re.findall(r'\S+', response)
    print(arr)
    needKill = float(arr[-1]) > CUT_OFF
    if needKill:
        r = os.popen(f"kill -9 {arr[-2]}")
        print('kill:', r)

if __name__ == '__main__':
    # Test CPU with
    # $ stress --cpu 1
    # crontab -e
    # Every 1 min
    # */1 * * * * sh dog.sh
    # ctrl+o, ctrl+x
    # crontab -l
    print(get_cpu_load())
A shell-out is not needed for @CodeGench's solution, so assuming Linux and Python's standard libraries:
def cpu_load():
    with open("/proc/stat", "r") as stat:
        (key, user, nice, system, idle, _) = stat.readline().split(None, 5)
    assert key == "cpu", "'cpu ...' should be the first line in /proc/stat"
    busy = int(user) + int(nice) + int(system)
    return 100 * busy / (busy + int(idle))
I don't believe that there is a well-supported multi-platform library available. Remember that Python itself is written in C so any library is simply going to make a smart decision about which OS-specific code snippet to run, as you suggested above.

heapy reports memory usage << top

NB: This is my first foray into memory profiling with Python, so perhaps I'm asking the wrong question here. Advice re improving the question appreciated.
I'm working on some code where I need to store a few million small strings in a set. This, according to top, is using ~3x the amount of memory reported by heapy. I'm not clear what all this extra memory is used for and how I can go about figuring out whether I can - and if so how to - reduce the footprint.
memtest.py:
from guppy import hpy
import gc
hp = hpy()
# do setup here - open files & init the class that holds the data
print 'gc', gc.collect()
hp.setrelheap()
raw_input('relheap set - enter to continue') # top shows 14MB resident for python
# load data from files into the class
print 'gc', gc.collect()
h = hp.heap()
print h
raw_input('enter to quit') # top shows 743MB resident for python
The output is:
$ python memtest.py
gc 5
relheap set - enter to continue
gc 2
Partition of a set of 3197065 objects. Total size = 263570944 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 3197061 100 263570168 100 263570168 100 str
1 1 0 448 0 263570616 100 types.FrameType
2 1 0 280 0 263570896 100 dict (no owner)
3 1 0 24 0 263570920 100 float
4 1 0 24 0 263570944 100 int
So in summary, heapy shows 264MB while top shows 743MB. What's using the extra 500MB?
Update:
I'm running 64 bit python on Ubuntu 12.04 in VirtualBox in Windows 7.
I installed guppy as per the answer here:
sudo pip install https://guppy-pe.svn.sourceforge.net/svnroot/guppy-pe/trunk/guppy

Calculating computational time and memory for a code in python

Can somebody help me figure out how much time and how much memory a piece of Python code takes?
Use this for calculating time:
import time
time_start = time.clock()
#run your code
time_elapsed = (time.clock() - time_start)
As referenced by the Python documentation:
time.clock()
On Unix, return the current processor time as a floating
point number expressed in seconds. The precision, and in fact the very
definition of the meaning of “processor time”, depends on that of the
C function of the same name, but in any case, this is the function to
use for benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
Reference: http://docs.python.org/library/time.html
Use this for calculating memory:
import resource
resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
Reference: http://docs.python.org/library/resource.html
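Note that time.clock() was removed in Python 3.8; on Python 3, the closest equivalents are time.process_time() for CPU time and time.perf_counter() for wall-clock time. A quick sketch:

import time

t0 = time.process_time()   # CPU time
w0 = time.perf_counter()   # wall-clock time
# ... run your code here ...
cpu_elapsed = time.process_time() - t0
wall_elapsed = time.perf_counter() - w0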
Based on @Daniel Li's answer, for cut & paste convenience and Python 3.x compatibility:
import time
import resource
time_start = time.perf_counter()
# insert code here ...
time_elapsed = (time.perf_counter() - time_start)
memMb=resource.getrusage(resource.RUSAGE_SELF).ru_maxrss/1024.0/1024.0
print ("%5.1f secs %5.1f MByte" % (time_elapsed,memMb))
Example:
2.3 secs 140.8 MByte
There is a really good library called jackedCodeTimerPy for timing your code. You should then use the resource package that Daniel Li suggested.
jackedCodeTimerPy gives really good reports like
label min max mean total run count
------- ----------- ----------- ----------- ----------- -----------
imports 0.00283813 0.00283813 0.00283813 0.00283813 1
loop 5.96046e-06 1.50204e-05 6.71864e-06 0.000335932 50
I like how it gives you statistics on it and the number of times the timer is run.
It's simple to use. If I want to measure the time code takes in a for loop, I just do the following:
from jackedCodeTimerPY import JackedTiming

JTimer = JackedTiming()
for i in range(50):
    JTimer.start('loop')  # 'loop' is the name of the timer
    doSomethingHere = 'This is really useful!'
    JTimer.stop('loop')
print(JTimer.report())  # prints the timing report
You can also have multiple timers running at the same time.
JTimer.start('first timer')
JTimer.start('second timer')
do_something = 'amazing'
JTimer.stop('first timer')
do_something = 'else'
JTimer.stop('second timer')
print(JTimer.report()) # prints the timing report
There are more usage examples in the repo. Hope this helps.
https://github.com/BebeSparkelSparkel/jackedCodeTimerPY
Use a memory profiler like guppy
>>> from guppy import hpy; h=hpy()
>>> h.heap()
Partition of a set of 48477 objects. Total size = 3265516 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 25773 53 1612820 49 1612820 49 str
1 11699 24 483960 15 2096780 64 tuple
2 174 0 241584 7 2338364 72 dict of module
3 3478 7 222592 7 2560956 78 types.CodeType
4 3296 7 184576 6 2745532 84 function
5 401 1 175112 5 2920644 89 dict of class
6 108 0 81888 3 3002532 92 dict (no owner)
7 114 0 79632 2 3082164 94 dict of type
8 117 0 51336 2 3133500 96 type
9 667 1 24012 1 3157512 97 __builtin__.wrapper_descriptor
<76 more rows. Type e.g. '_.more' to view.>
>>> h.iso(1,[],{})
Partition of a set of 3 objects. Total size = 176 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 1 33 136 77 136 77 dict (no owner)
1 1 33 28 16 164 93 list
2 1 33 12 7 176 100 int
>>> x=[]
>>> h.iso(x).sp
0: h.Root.i0_modules['__main__'].__dict__['x']

