Notice that you're doing `fil-profile`**`python`**, rather than `fil-profile run` as you would if you were profiling the full script.
Only functions running for the duration of the `filprofiler.api.profile()` call will have memory profiling enabled, including of course the function you pass in.
The rest of the code will run at (close to) normal speed and configuration.
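A minimal sketch of API mode follows. The script name, function, and `path` directory are illustrative; it assumes the `filprofiler` package is installed and that the script is launched via `fil-profile python`:

```python
# yourscript.py -- launch with:  fil-profile python yourscript.py
# (script name and report path are illustrative)

def make_big_list():
    # Allocate ~8 MB so the allocation stands out in the report.
    return [0] * 1_000_000

def main():
    # Import and call the API only where profiling is wanted: the
    # call records allocations made inside the passed-in function
    # and writes a report under the `path` directory.
    from filprofiler.api import profile
    return profile(lambda: make_big_list(), path="fil-result")
```

Note that, as described above, profiling data is only gathered when the process was started with `fil-profile python` rather than plain `python`.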
I've also created an
<a href="https://pythonspeed.com/sciagraph/"
>always-on profiler called Sciagraph</a> that is fast and robust enough to run in production.</blockquote>
<br>
<h2>Learn how to reduce memory usage</h2>

<p>Need help reducing your data processing application's memory use? Check out tips and tricks <a href="https://pythonspeed.com/memory/">here</a>.</p>
<h2>Understanding the graphs</h2>
<p>The flame graphs show the callstacks responsible for allocations at peak.</p>
<p>The wider (and the redder) the bar, the more memory was allocated by that function or its callers.
If the bar is 100% of the width, that's all the allocated memory.</p>
<p>The left-right axis has no meaning!
The order of frames is somewhat arbitrary; for example, multiple calls to the same function may well have been merged into a single callstack.
So you can't tell from the graph which allocations happened first.
All you can tell is that, at the time of peak allocation, these stacktraces were responsible for these allocations.
</p>
<p>The first graph shows the normal callgraph: if <tt>main()</tt> calls <tt>g()</tt> calls <tt>f()</tt>, let's say, then <tt>main()</tt> will be at the top.
The second graph shows the reverse callgraph, from <tt>f()</tt> upwards.</p>
<p>Why is the second graph useful? If <tt>f()</tt> is called from multiple places, in the first graph it will show up multiple times, at the bottom.
In the second reversed graph all calls to <tt>f()</tt> will be merged together.</p>
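The merging behavior can be illustrated with a toy aggregation over hypothetical callstack samples (this is not Fil's actual code; the stacks and byte counts are made up):

```python
# Toy data: (callstack, bytes allocated at peak) -- values are made up.
from collections import Counter

samples = [
    (("main", "g", "f"), 100),
    (("main", "h", "f"), 50),
]

# Normal view: group by the full stack, so f shows up once per caller.
normal = Counter()
for stack, size in samples:
    normal[stack] += size

# Reversed view: grouping from the leaf upwards merges all calls to f.
by_leaf = Counter()
for stack, size in samples:
    by_leaf[stack[-1]] += size

print(len(normal))    # 2 -- f appears under both g and h
print(by_leaf["f"])   # 150 -- the reversed graph shows the combined total
```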
<h2>Understanding what Fil tracks</h2>
<p>Fil measures how much memory has been allocated; this is not the same as how much memory the process is actively using, nor is it the same as memory resident in RAM.</p>
<ul>
<li>If the data gets dumped from RAM to swap, Fil still counts it, but it's not counted as resident in RAM.</li>
<li>If the memory is a large chunk of all zeros, on Linux the OS won't use any RAM until you actually modify that memory, but Fil will still count it.</li>
<li>If you have memory that only gets freed on garbage collection
(this will happen if you have circular references in your data structures),
memory can be freed at inconsistent times across different runs, especially
if you're using threads.</li>
</ul>
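The first two points can be seen with a small standard-library sketch on Linux (the 200 MB size is illustrative; on Linux, `ru_maxrss` is reported in kilobytes):

```python
# Demonstrates "allocated" vs "resident": an anonymous mmap full of
# zeros takes up address space immediately (Fil would count it), but
# physical RAM only once the pages are actually written.
import mmap
import resource

SIZE = 200 * 1024 * 1024  # 200 MB

def peak_rss():
    # Peak resident set size of this process (kilobytes on Linux).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()
m = mmap.mmap(-1, SIZE)   # allocated, but mostly not resident yet
after_alloc = peak_rss()

chunk = b"x" * (1024 * 1024)
for _ in range(SIZE // len(chunk)):
    m.write(chunk)        # touching the pages makes them resident
after_touch = peak_rss()
m.close()

print(after_alloc - before, after_touch - before)
```

The second number should be dramatically larger than the first: the process was "using" 200 MB of allocated memory the whole time, but it only became resident once written.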
<p>See <a href="https://pythonspeed.com/articles/measuring-memory-python/">this article</a> for more details.</p>