Hugo Touvron
Facebook AI Research
Verified email at fb.com
Title
Cited by
Year
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by 9543 · 2023
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 9037 · 2023
Training data-efficient image transformers & distillation through attention
H Touvron, M Cord, M Douze, F Massa, A Sablayrolles, H Jégou
International Conference on Machine Learning, 10347-10357, 2021
Cited by 6874 · 2021
Emerging properties in self-supervised vision transformers
M Caron, H Touvron, I Misra, H Jégou, J Mairal, P Bojanowski, A Joulin
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
Cited by 5216 · 2021
Code llama: Open foundation models for code
B Roziere, J Gehring, F Gloeckle, S Sootla, I Gat, XE Tan, Y Adi, J Liu, ...
arXiv preprint arXiv:2308.12950, 2023
Cited by 1191 · 2023
Going deeper with image transformers
H Touvron, M Cord, A Sablayrolles, G Synnaeve, H Jégou
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
Cited by 1099 · 2021
ConViT: Improving vision transformers with soft convolutional inductive biases
S d'Ascoli, H Touvron, M Leavitt, A Morcos, G Biroli, L Sagun
International Conference on Machine Learning, 2021
Cited by 862 · 2021
LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
B Graham, A El-Nouby, H Touvron, P Stock, A Joulin, H Jégou, M Douze
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
Cited by 804* · 2021
ResMLP: Feedforward networks for image classification with data-efficient training
H Touvron, P Bojanowski, M Caron, M Cord, A El-Nouby, E Grave, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Cited by 741* · 2021
Fixing the train-test resolution discrepancy
H Touvron, A Vedaldi, M Douze, H Jégou
Advances in neural information processing systems 32, 2019
Cited by 646 · 2019
The Llama 3 Herd of Models
Meta AI
arXiv preprint arXiv:2407.21783, 2024
Cited by 573* · 2024
XCiT: Cross-Covariance Image Transformers
A El-Nouby, H Touvron, M Caron, P Bojanowski, M Douze, A Joulin, ...
Advances in Neural Information Processing Systems, 2021
Cited by 531* · 2021
ResNet strikes back: An improved training procedure in timm
R Wightman, H Touvron, H Jégou
NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future, 2021
Cited by 497 · 2021
DeiT III: Revenge of the ViT
H Touvron, M Cord, H Jégou
Proceedings of the European Conference on Computer Vision (ECCV), 2022
Cited by 348 · 2022
Llama 2: Open foundation and fine-tuned chat models
H Touvron
arXiv preprint arXiv:2307.09288, 2023
Cited by 160* · 2023
Are large-scale datasets necessary for self-supervised pre-training?
A El-Nouby, G Izacard, H Touvron, I Laptev, H Jégou, E Grave
arXiv preprint arXiv:2112.10740, 2021
Cited by 148 · 2021
LLaMA: Open and Efficient Foundation Language Models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix
arXiv [cs.CL], 2023
Cited by 121*
Three things everyone should know about Vision Transformers
H Touvron, M Cord, A El-Nouby, J Verbeek, H Jégou
Proceedings of the European Conference on Computer Vision (ECCV), 2022
Cited by 108 · 2022
Introducing LLaMA: A foundational, 65-billion-parameter large language model
Meta AI
Meta AI, 2023
Cited by 79 · 2023
Grafit: Learning fine-grained image representations with coarse labels
H Touvron, A Sablayrolles, M Douze, M Cord, H Jégou
Proceedings of the IEEE/CVF International Conference on Computer Vision, 874-884, 2021
Cited by 76 · 2021
Articles 1–20