[C14] On Predicting Bottlenecks in Wavefront Parallel Video Coding Using Deep Neural Networks

Panagou, N., Oikonomou, P., Papadopoulos, P., Koziri, M., Loukopoulos, T., Iakovidis, D.
1st Workshop on Pervasive Intelligence (PEINT), EANN 2019
Publication year: 2019

Video coding incurs high computational complexity, particularly at the encoder side. For this reason, parallelism is employed at the various encoding steps. One popular coarse-grained parallelization tool offered by many standards is wavefront parallelism. Under this scheme, each row of blocks is assigned to a separate thread for processing. A thread may commence encoding a particular block only once certain precedence constraints are met, namely that the left block of the same row and the top and top-right blocks of the previous row have finished compression. These constraints inevitably introduce processing delays. Therefore, to optimize performance, it is of paramount importance to identify potential bottlenecks before the compression of a frame starts, so that they can be alleviated through better resource allocation. In this paper we present a simulation model that predicts bottlenecks based on block compression times estimated by a regression neural network. Experiments with datasets obtained using the reference encoder of HEVC (High Efficiency Video Coding) illustrate the merits of the proposed model.
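The wavefront precedence constraints described in the abstract can be illustrated with a minimal timing model. The sketch below (illustrative code, not from the paper; function and variable names are our own) takes a grid of estimated per-block compression times and computes each block's earliest finish time, assuming one thread per row and the stated dependencies: a block may start only after its left neighbor and the top and top-right blocks of the previous row have finished.

```python
# Minimal sketch of a wavefront-parallelism timing model (illustrative only).
# A block (r, c) may start once (r, c-1), (r-1, c), and (r-1, c+1) are done,
# matching the precedence constraints described in the abstract.

def wavefront_finish_times(times):
    """times: 2D list of estimated block compression times, one row per block row."""
    rows, cols = len(times), len(times[0])
    finish = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            deps = []
            if c > 0:
                deps.append(finish[r][c - 1])          # left block, same row
            if r > 0:
                deps.append(finish[r - 1][c])          # top block, previous row
                if c + 1 < cols:
                    deps.append(finish[r - 1][c + 1])  # top-right block
            start = max(deps) if deps else 0.0
            finish[r][c] = start + times[r][c]
    return finish

def frame_makespan(times):
    """Total frame encoding time under ideal per-row threading."""
    return wavefront_finish_times(times)[-1][-1]
```

For a 3×4 grid of uniform unit-time blocks, `frame_makespan` yields 8.0 time units, reflecting the two-block stagger between consecutive rows; feeding in per-block times predicted by a regression network, as the paper proposes, would expose which rows dominate the critical path.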
