Sang Michael Xie
Title
Cited by
Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 2579 · 2021
Combining satellite imagery and machine learning to predict poverty
N Jean, M Burke, M Xie, WM Davis, DB Lobell, S Ermon
Science 353 (6301), 790-794, 2016
Cited by 1668 · 2016
Wilds: A benchmark of in-the-wild distribution shifts
PW Koh, S Sagawa, H Marklund, SM Xie, M Zhang, A Balsubramani, ...
International conference on machine learning, 5637-5664, 2021
Cited by 1142 · 2021
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Cited by 626* · 2022
Transfer learning from deep features for remote sensing and poverty mapping
M Xie, N Jean, M Burke, D Lobell, S Ermon
AAAI, 2016
Cited by 496 · 2016
An Explanation of In-context Learning as Implicit Bayesian Inference
SM Xie, A Raghunathan, P Liang, T Ma
International Conference on Learning Representations (ICLR), 2022
Cited by 381 · 2022
Adversarial training can hurt generalization
A Raghunathan*, SM Xie*, F Yang, JC Duchi, P Liang
arXiv preprint arXiv:1906.06032, 2019
Cited by 247 · 2019
Weakly supervised deep learning for segmentation of remote sensing imagery
S Wang, W Chen, SM Xie, G Azzari, DB Lobell
Remote Sensing 12 (2), 207, 2020
Cited by 208 · 2020
Understanding and mitigating the tradeoff between robustness and accuracy
A Raghunathan*, SM Xie*, F Yang, J Duchi, P Liang
International Conference on Machine Learning (ICML), 2020
Cited by 207 · 2020
Reward design with language models
M Kwon, SM Xie, K Bullard, D Sadigh
arXiv preprint arXiv:2303.00001, 2023
Cited by 94 · 2023
Extending the wilds benchmark for unsupervised adaptation
S Sagawa, PW Koh, T Lee, I Gao, SM Xie, K Shen, A Kumar, W Hu, ...
arXiv preprint arXiv:2112.05090, 2021
Cited by 93 · 2021
Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance
N Jean*, SM Xie*, S Ermon
Advances in Neural Information Processing Systems (NeurIPS), 2018
Cited by 91 · 2018
Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation
K Shen*, R Jones*, A Kumar*, SM Xie*, JZ HaoChen, T Ma, P Liang
arXiv preprint arXiv:2204.00570, 2022
Cited by 81 · 2022
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
C Wei, SM Xie, T Ma
Neural Information Processing Systems (NeurIPS), 2021
Cited by 74 · 2021
Reparameterizable Subset Sampling via Continuous Relaxations
SM Xie, S Ermon
IJCAI, 2019
Cited by 74 · 2019
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
SM Xie*, A Kumar*, R Jones*, F Khani, T Ma, P Liang
International Conference on Learning Representations (ICLR), 2022
Cited by 51 · 2022
Data selection for language models via importance resampling
SM Xie, S Santurkar, T Ma, PS Liang
Advances in Neural Information Processing Systems 36, 2024
Cited by 45 · 2024
Doremi: Optimizing data mixtures speeds up language model pretraining
SM Xie, H Pham, X Dong, N Du, H Liu, Y Lu, PS Liang, QV Le, T Ma, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 36 · 2024
Same pre-training loss, better downstream: Implicit bias matters for language models
H Liu, SM Xie, Z Li, T Ma
International Conference on Machine Learning, 22188-22214, 2023
Cited by 21 · 2023
No true state-of-the-art? ood detection methods are inconsistent across datasets
F Tajwar, A Kumar, SM Xie, P Liang
arXiv preprint arXiv:2109.05554, 2021
Cited by 16 · 2021
Articles 1–20