Fix initiation interval of pooling and zeropadding layers on Vitis backend #1141
steltze wants to merge 3 commits into fastmachinelearning:main
Conversation
Is it important for the II to be 1? Generally, in io_stream, conv layers have a larger II. For zero-padding at least, the utilization seems to go up.
@jmitrevs in the model that I am working with, we only use separable convolutions. If the II=1 for the zero-padding and max-pooling layers, the depthwise and pointwise convolutions have a smaller latency (in cycles): depthwise-pointwise latency ≈ 512*512 = 262144 cycles (the image size), versus a zero-padding latency of 787473 cycles. Yes, this change allocates more resources, but since we are focusing on latency, padding and pooling seem to be the bottlenecks instead of the convolutions, which does not make much sense since they don't perform such heavy computations. I can take some more measurements to get a grasp on how resource utilization scales.
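As a sanity check of the cycle counts quoted above, a rough latency model for a pipelined streaming loop can be sketched as follows (the model and the inferred effective II are illustrative, not taken from the synthesis reports):

```cpp
#include <cassert>

// Rough latency model for a pipelined streaming loop over H*W pixels:
// latency_cycles ~= H * W * II (ignoring pipeline fill/flush overhead).
constexpr long stream_latency(long height, long width, long ii) {
    return height * width * ii;
}

// Depthwise/pointwise conv at II=1 on a 512x512 image: 262144 cycles,
// matching the "image size" figure quoted above.
static_assert(stream_latency(512, 512, 1) == 262144, "matches 512*512");

// The zero-padding layer reported ~787473 cycles on the same image,
// i.e. an effective II of roughly 787473 / 262144 ~ 3 -- which is why
// padding, not convolution, shows up as the bottleneck.
```

This is why forcing II=1 on the padding and pooling loops brings their latency down to roughly one cycle per pixel, in line with the convolutions.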
My experience with this is somewhat the opposite: yes, it can't actually achieve II=1, and pipelining the function for large inputs takes ages, so I avoided that entirely.

On the Vitis backend with io_stream, zero-padding and pooling layers don't reach II=1 and are slower than, for example, the Conv layers.

Type of change
Tests
Synthesized the zero-padding and pooling models in the pytests. The code achieves II=1 and the latency cycles match the trip count.

Input size = 128x128x3
C-Synthesis results with Vitis HLS 2023.2
Tested also on a dummy CNN.
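The shape of fix the tests exercise can be sketched as a zero-padding kernel written as one flattened loop over the output pixels, the loop style that lets Vitis HLS pipeline the whole function at II=1 (a minimal sketch: `std::queue` stands in for `hls::stream`, and the sizes and function name are illustrative, not the actual hls4ml template):

```cpp
#include <queue>

// Illustrative dimensions: a 4x4 input padded by 1 on every side.
constexpr int IN_H = 4, IN_W = 4, PAD = 1;
constexpr int OUT_H = IN_H + 2 * PAD, OUT_W = IN_W + 2 * PAD;

// Zero-padding as a single flattened loop over output pixels. In HLS
// this loop body would carry `#pragma HLS PIPELINE II=1`; because every
// iteration does the same bounded work (push one value, maybe pop one),
// the tool can reach II=1 and the latency matches the trip count
// OUT_H * OUT_W, as checked in the synthesis results above.
void zeropad2d(std::queue<int>& in, std::queue<int>& out) {
    for (int i = 0; i < OUT_H * OUT_W; ++i) {
        int row = i / OUT_W, col = i % OUT_W;
        bool inside = row >= PAD && row < OUT_H - PAD &&
                      col >= PAD && col < OUT_W - PAD;
        if (inside) {
            out.push(in.front());  // interior pixel: forward the input
            in.pop();
        } else {
            out.push(0);  // padding region: emit zero, consume nothing
        }
    }
}
```

The same idea applies to the pooling layers: a flat loop with a fixed, data-independent amount of work per iteration is what allows the pipeline pragma to achieve II=1.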
Test Configuration:
Checklist
I have run pre-commit on the files I edited or added.