1.
Malita, Mihaela; Popescu, George Vladut; Stefan, Gheorghe M.
Heterogeneous Computing for Markov Models in Big Data Proceedings Article
In: 2019 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 1500-1505, 2019.
Abstract | Links | BibTeX | Tags: Hidden Markov models;Graphics processing units;Acceleration;Computational modeling;Markov processes;Viterbi algorithm;Data models;Big data;Markov models;parallel architecture;accelerators;heterogeneous computing
@inproceedings{9071100,
title = {Heterogeneous Computing for Markov Models in Big Data},
author = {Mihaela Malita and George Vladut Popescu and Gheorghe M. Stefan},
doi = {10.1109/CSCI49370.2019.00279},
year = {2019},
date = {2019-12-01},
booktitle = {2019 International Conference on Computational Science and Computational Intelligence (CSCI)},
pages = {1500-1505},
abstract = {Many Big Data problems, including those related to Markov Models, are solved using heterogeneous systems: a host plus a parallel programmable accelerator. Current solutions for the accelerator part - for example, a GPU used as a GPGPU - provide limited acceleration due to architectural constraints. The paper introduces a programmable parallel accelerator able to perform efficient vector and matrix operations, avoiding the limitations of current systems designed with "off-the-shelf" solutions. Our main result is an architecture whose actual performance is a much higher percentage of its peak performance than that of established accelerators. The performance improvements come from two features: the addition of a reduction network at the output of a linear array of cells, and an appropriate use of a serial register distributed along the same linear array of cells. Thus, for an n-state Markov Model, instead of a solution with size in O(n^2) and acceleration in O(n^2/log n), we offer an accelerator with size in O(n) and acceleration in O(n).},
keywords = {Hidden Markov models;Graphics processing units;Acceleration;Computational modeling;Markov processes;Viterbi algorithm;Data models;Big data;Markov models;parallel architecture;accelerators;heterogeneous computing},
pubstate = {published},
tppubtype = {inproceedings}
}
Many Big Data problems, including those related to Markov Models, are solved using heterogeneous systems: a host plus a parallel programmable accelerator. Current solutions for the accelerator part - for example, a GPU used as a GPGPU - provide limited acceleration due to architectural constraints. The paper introduces a programmable parallel accelerator able to perform efficient vector and matrix operations, avoiding the limitations of current systems designed with "off-the-shelf" solutions. Our main result is an architecture whose actual performance is a much higher percentage of its peak performance than that of established accelerators. The performance improvements come from two features: the addition of a reduction network at the output of a linear array of cells, and an appropriate use of a serial register distributed along the same linear array of cells. Thus, for an n-state Markov Model, instead of a solution with size in O(n^2) and acceleration in O(n^2/log n), we offer an accelerator with size in O(n) and acceleration in O(n).
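
To make the targeted kernel concrete, below is a minimal Python/NumPy sketch (not the authors' implementation) of one Viterbi recursion step for an n-state Markov model; the names delta, log_A, and log_b_obs are illustrative assumptions. Each step performs the O(n^2) score-and-reduce work that, per the abstract, a linear array of n cells with a reduction network can execute with O(n)-size hardware.

import numpy as np

def viterbi_step(delta, log_A, log_b_obs):
    """One Viterbi recursion step for an n-state Markov model.

    delta     : (n,) best log-probabilities ending in each state at the previous observation
    log_A     : (n, n) log transition matrix, log_A[i, j] = log P(state j | state i)
    log_b_obs : (n,) log emission probabilities of the current observation per state

    The n*n candidate scores and the max-reduction over predecessors are the
    O(n^2) per-step work that the paper's accelerator targets.
    """
    scores = delta[:, None] + log_A          # n x n candidate path scores
    return scores.max(axis=0) + log_b_obs    # reduce over predecessor states

One plausible mapping, as suggested by the abstract, is that each of the n cells holds one column of the transition matrix, while the reduction network at the array's output performs the per-state reduction that the NumPy max above expresses in software.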