Linglong Kong

Date: Thursday, November 1, 2018

Matrix factorization has wide applications in recommender systems and signal processing. Existing matrix factorization methods are mostly based on squared loss and aim to yield a low-rank matrix that interprets conditional sample means. However, in many real applications with extreme data, least squares cannot explain the central tendency or tail distributions, yielding undesirable estimates. In this paper, we study quantile matrix factorization (QMF), which introduces the check loss originating from quantile regression into matrix factorization. The non-smoothness of the check loss, however, brings significant challenges to numerical computation. We propose a nearly optimal and efficient algorithm to solve QMF by extending Nesterov's optimal smooth approximation procedure to the case of matrix factorization. We theoretically show that, under certain conditions, the optimal solution to the proposed smooth approximation converges to the optimal solution to the original nonsmooth and nonconvex QMF problem, with competitive convergence rates. Extensive simulations based on synthetic and real-world data have been conducted to verify our theoretical findings as well as algorithm performance.
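The ideas in the abstract can be illustrated with a small sketch. This is not the authors' algorithm; it is a minimal NumPy illustration, under the standard assumptions that the check loss is ρ_τ(u) = max_{a ∈ [τ−1, τ]} a·u and that Nesterov smoothing with a quadratic prox term gives ρ_μ(u) = max_{a ∈ [τ−1, τ]} (a·u − μa²/2), whose maximizer (and hence gradient) is clip(u/μ, τ−1, τ). The function names (`qmf`, `smoothed_check_loss`) and the plain gradient-descent update are hypothetical choices for illustration.

```python
import numpy as np

def smoothed_check_loss(u, tau, mu):
    # Nesterov-smoothed check loss: max over a in [tau-1, tau] of (a*u - mu*a^2/2).
    a = np.clip(u / mu, tau - 1.0, tau)   # closed-form maximizer
    return a * u - 0.5 * mu * a**2

def smoothed_check_grad(u, tau, mu):
    # Gradient of the smoothed loss w.r.t. the residual u.
    return np.clip(u / mu, tau - 1.0, tau)

def qmf(M, mask, rank, tau=0.5, mu=0.5, lr=0.01, iters=300, seed=0):
    """Illustrative quantile matrix factorization: minimize the smoothed
    check loss over observed entries (mask == 1) by gradient descent on
    the factors U (m x rank) and V (n x rank), so M is approximated by U V^T."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        R = (M - U @ V.T) * mask                   # residuals on observed entries
        G = smoothed_check_grad(R, tau, mu) * mask
        # d(loss)/dU = -G @ V and d(loss)/dV = -G.T @ U, so descend:
        U = U + lr * (G @ V)
        V = V + lr * (G.T @ U)
    return U, V
```

With τ = 0.5 the check loss is (half) the absolute loss, so the sketch recovers a median-style factorization; other τ values target conditional quantiles, which is the point of QMF for heavy-tailed or extreme data. A smaller μ tracks the nonsmooth loss more closely at the cost of a less smooth objective.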

Important Dates

December 10 – December 21: Fall Term Exam Period

December 22 – January 2: Winter Holiday (University Closed)

News

Upcoming Exams

**STAT 4100 A01 Final Exam**

Wednesday, December 19 at 9:00 a.m.

**STAT 3170 A01 Final Exam**

Wednesday, December 19 at 6:00 p.m.

Upcoming Seminar

Statistics seminar: **Erfan Houqe**, "Random effects covariance matrix modeling for longitudinal data with covariates measurement error". Thursday, January 17 at 2:45 p.m., P230 Duff Roblin.

Where are they now?

Xuan Li, Ph.D. (2012)

Robert Platt, M.Sc. (1993)