Yura Malitsky  (Yurii Malitskyi)
Title · Cited by · Year
Projected reflected gradient methods for monotone variational inequalities
Y Malitsky
SIAM Journal on Optimization 25 (1), 502-520, 2015
Cited by 377 · 2015
A forward-backward splitting method for monotone inclusions without cocoercivity
Y Malitsky, MK Tam
SIAM Journal on Optimization 30 (2), 1451-1472, 2020
Cited by 256 · 2020
Golden ratio algorithms for variational inequalities
Y Malitsky
Mathematical Programming 184 (1), 383-410, 2020
Cited by 191 · 2020
A first-order primal-dual algorithm with linesearch
Y Malitsky, T Pock
SIAM Journal on Optimization 28 (1), 411-432, 2018
Cited by 152 · 2018
An extragradient algorithm for monotone variational inequalities
YV Malitsky, VV Semenov
Cybernetics and Systems Analysis 50 (2), 271-277, 2014
Cited by 138 · 2014
A hybrid method without extrapolation step for solving variational inequality problems
YV Malitsky, VV Semenov
Journal of Global Optimization 61 (1), 193-202, 2015
Cited by 129 · 2015
Adaptive Gradient Descent without Descent
Y Malitsky, K Mishchenko
International Conference on Machine Learning 119, 6702-6712, 2020
Cited by 102 · 2020
Revisiting stochastic extragradient
K Mishchenko, D Kovalev, E Shulgin, P Richtárik, Y Malitsky
International Conference on Artificial Intelligence and Statistics, 4573-4582, 2020
Cited by 85 · 2020
Stochastic variance reduction for variational inequality methods
A Alacaoglu, Y Malitsky
Conference on Learning Theory, 778-816, 2022
Cited by 76 · 2022
Shadow Douglas–Rachford splitting for monotone inclusions
ER Csetnek, Y Malitsky, MK Tam
Applied Mathematics & Optimization 80, 665-678, 2019
Cited by 72 · 2019
Proximal extrapolated gradient methods for variational inequalities
Y Malitsky
Optimization Methods and Software 33 (1), 140-164, 2018
Cited by 58 · 2018
A new regret analysis for Adam-type algorithms
A Alacaoglu, Y Malitsky, P Mertikopoulos, V Cevher
International Conference on Machine Learning, 202-210, 2020
Cited by 48 · 2020
Resolvent splitting for sums of monotone operators with minimal lifting
Y Malitsky, MK Tam
Mathematical Programming 201 (1), 231-262, 2023
Cited by 26 · 2023
Forward-reflected-backward method with variance reduction
A Alacaoglu, Y Malitsky, V Cevher
Computational Optimization and Applications 80 (2), 321-346, 2021
Cited by 24 · 2021
Convergence of adaptive algorithms for constrained weakly convex optimization
A Alacaoglu, Y Malitsky, V Cevher
Advances in Neural Information Processing Systems 34, 14214-14225, 2021
Cited by 21* · 2021
The primal-dual hybrid gradient method reduces to a primal method for linearly constrained optimization problems
Y Malitsky
arXiv preprint arXiv:1706.02602, 2017
Cited by 20* · 2017
A first-order primal-dual method with adaptivity to local smoothness
ML Vladarean, Y Malitsky, V Cevher
Advances in Neural Information Processing Systems 34, 6171-6182, 2021
Cited by 17 · 2021
Block-coordinate primal-dual method for nonsmooth minimization over linear constraints
DR Luke, Y Malitsky
Large-Scale and Distributed Optimization, 121-147, 2018
Cited by 12 · 2018
Beyond the golden ratio for variational inequality algorithms
A Alacaoglu, A Böhm, Y Malitsky
Journal of Machine Learning Research 24 (172), 1-33, 2023
Cited by 11 · 2023
Adaptive proximal gradient method for convex optimization
Y Malitsky, K Mishchenko
arXiv preprint arXiv:2308.02261, 2023
Cited by 10 · 2023
Articles 1–20