Sinho Chewi
Title · Cited by · Year
Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
S Chen, S Chewi, J Li, Y Li, A Salim, AR Zhang
International Conference on Learning Representations, 2023
Cited by 222 · 2023
Gradient descent algorithms for Bures-Wasserstein barycenters
S Chewi, T Maunu, P Rigollet, AJ Stromme
Conference on Learning Theory 125, 1276-1304, 2020
Cited by 103 · 2020
Analysis of Langevin Monte Carlo from Poincaré to log-Sobolev
S Chewi, MA Erdogdu, MB Li, R Shen, M Zhang
Conference on Learning Theory, 1-2, 2022
Cited by 100 · 2022
SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence
S Chewi, TL Gouic, C Lu, T Maunu, P Rigollet
Advances in Neural Information Processing Systems 33, 2098-2109, 2020
Cited by 73 · 2020
Optimal dimension dependence of the Metropolis-adjusted Langevin algorithm
S Chewi, C Lu, K Ahn, X Cheng, T Le Gouic, P Rigollet
Conference on Learning Theory, 1260-1300, 2021
Cited by 71 · 2021
Variational inference via Wasserstein gradient flows
M Lambert, S Chewi, F Bach, S Bonnabel, P Rigollet
Advances in Neural Information Processing Systems 35, 14434-14447, 2022
Cited by 67 · 2022
The probability flow ODE is provably fast
S Chen, S Chewi, H Lee, Y Li, J Lu, A Salim
Advances in Neural Information Processing Systems 36, 2024
Cited by 66 · 2024
Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo
K Balasubramanian, S Chewi, MA Erdogdu, A Salim, M Zhang
Conference on Learning Theory, 2896-2923, 2022
Cited by 65 · 2022
Efficient constrained sampling via the mirror-Langevin algorithm
K Ahn, S Chewi
Advances in Neural Information Processing Systems 34, 28405-28418, 2021
Cited by 58 · 2021
Exponential ergodicity of mirror-Langevin diffusions
S Chewi, TL Gouic, C Lu, T Maunu, P Rigollet, A Stromme
Advances in Neural Information Processing Systems 33, 19573-19585, 2020
Cited by 52 · 2020
Improved analysis for a proximal algorithm for sampling
Y Chen, S Chewi, A Salim, A Wibisono
Conference on Learning Theory, 2984-3014, 2022
Cited by 50 · 2022
Log-concave sampling
S Chewi
Book draft available at https://chewisinho.github.io, 2023
Cited by 49* · 2023
Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent
J Altschuler, S Chewi, PR Gerber, A Stromme
Advances in Neural Information Processing Systems 34, 22132-22145, 2021
Cited by 43 · 2021
Dimension-free log-Sobolev inequalities for mixture distributions
HB Chen, S Chewi, J Niles-Weed
Journal of Functional Analysis 281 (11), 109236, 2021
Cited by 39 · 2021
Learning threshold neurons via the "edge of stability"
K Ahn, S Bubeck, S Chewi, YT Lee, F Suarez, Y Zhang
Advances in Neural Information Processing Systems 36, 2022
Cited by 35 · 2022
Faster high-accuracy log-concave sampling via algorithmic warm starts
JM Altschuler, S Chewi
Journal of the ACM 71 (3), 1-55, 2024
Cited by 33 · 2024
Fast and smooth interpolation on Wasserstein space
S Chewi, J Clancy, T Le Gouic, P Rigollet, G Stepaniants, A Stromme
International Conference on Artificial Intelligence and Statistics, 3061-3069, 2021
Cited by 31 · 2021
Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein Space
MZ Diao, K Balasubramanian, S Chewi, A Salim
International Conference on Machine Learning, 7960-7991, 2023
Cited by 27 · 2023
Improved discretization analysis for underdamped Langevin Monte Carlo
S Zhang, S Chewi, M Li, K Balasubramanian, MA Erdogdu
Conference on Learning Theory, 36-71, 2023
Cited by 26 · 2023
An entropic generalization of Caffarelli’s contraction theorem via covariance inequalities
S Chewi, AA Pooladian
Comptes Rendus. Mathématique 361 (G9), 1471-1482, 2023
Cited by 26 · 2023
Articles 1–20