Sang Michael Xie
PhD Candidate, Stanford University
Verified email at cs.stanford.edu - Homepage

Title
Cited by
Year
Combining satellite imagery and machine learning to predict poverty
N Jean, M Burke, M Xie, WM Davis, DB Lobell, S Ermon
Science 353 (6301), 790-794, 2016
1154 · 2016
Transfer learning from deep features for remote sensing and poverty mapping
M Xie, N Jean, M Burke, D Lobell, S Ermon
AAAI, 2016
362 · 2016
Wilds: A benchmark of in-the-wild distribution shifts
PW Koh, S Sagawa, H Marklund, SM Xie, M Zhang, A Balsubramani, ...
International Conference on Machine Learning, 5637-5664, 2021
245 · 2021
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
129 · 2021
Adversarial training can hurt generalization
A Raghunathan*, SM Xie*, F Yang, JC Duchi, P Liang
arXiv preprint arXiv:1906.06032, 2019
114 · 2019
Weakly supervised deep learning for segmentation of remote sensing imagery
S Wang, W Chen, SM Xie, G Azzari, DB Lobell
Remote Sensing 12 (2), 207, 2020
96 · 2020
Understanding and mitigating the tradeoff between robustness and accuracy
A Raghunathan*, SM Xie*, F Yang, J Duchi, P Liang
International Conference on Machine Learning (ICML), 2020
92 · 2020
Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance
N Jean*, SM Xie*, S Ermon
Advances in Neural Information Processing Systems (NeurIPS), 2018
61* · 2018
Reparameterizable Subset Sampling via Continuous Relaxations
SM Xie, S Ermon
IJCAI, 2019
34* · 2019
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
SM Xie*, A Kumar*, R Jones*, F Khani, T Ma, P Liang
International Conference on Learning Representations (ICLR), 2021
16 · 2021
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
C Wei, SM Xie, T Ma
Neural Information Processing Systems (NeurIPS), 2021
12 · 2021
Extending the WILDS Benchmark for Unsupervised Adaptation
S Sagawa, PW Koh, T Lee, I Gao, SM Xie, K Shen, A Kumar, W Hu, ...
arXiv preprint arXiv:2112.05090, 2021
7 · 2021
Incorporating spatial context and fine-grained detail from satellite imagery to predict poverty
JH Kim, M Xie, N Jean, S Ermon
Working paper, Stanford University, 2016
6 · 2016
An Explanation of In-context Learning as Implicit Bayesian Inference
SM Xie, A Raghunathan, P Liang, T Ma
International Conference on Learning Representations (ICLR), 2022
4 · 2022
Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization
SM Xie, T Ma, P Liang
International Conference on Machine Learning (ICML), 2021
2 · 2021
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
K Shen, R Jones, A Kumar, SM Xie, JZ HaoChen, T Ma, P Liang
arXiv preprint arXiv:2204.00570, 2022
1 · 2022
How Does Contrastive Pre-training Connect Disparate Domains?
K Shen, RM Jones, A Kumar, SM Xie, P Liang
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
1 · 2021
Ensembles and Cocktails: Robust Finetuning for Natural Language Generation
J Hewitt, XL Li, SM Xie, B Newman, P Liang
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
2021
No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
F Tajwar, A Kumar, SM Xie, P Liang
arXiv preprint arXiv:2109.05554, 2021
2021
Automated detection of skin reactions in epicutaneous patch testing using machine learning
WH Chan, R Srivastava, N Damaraju, H Do, G Burnett, J MacFarlane, ...
The British Journal of Dermatology, 2021
2021
Articles 1–20