Eric Nalisnick
Assistant Professor, Johns Hopkins University
Verified email at jhu.edu - Homepage
Title
Cited by
Year
Normalizing flows for probabilistic modeling and inference
G Papamakarios, E Nalisnick, DJ Rezende, S Mohamed, ...
Journal of Machine Learning Research 22 (57), 1-64, 2021
Cited by 1696 · 2021
Do deep generative models know what they don't know?
E Nalisnick, A Matsukawa, YW Teh, D Gorur, B Lakshminarayanan
arXiv preprint arXiv:1810.09136, 2018
Cited by 801 · 2018
Stick-breaking variational autoencoders
E Nalisnick, P Smyth
arXiv preprint arXiv:1605.06197, 2016
Cited by 204 · 2016
Improving document ranking with dual word embeddings
E Nalisnick, B Mitra, N Craswell, R Caruana
Proceedings of the 25th international conference companion on world wide web …, 2016
Cited by 203 · 2016
Detecting out-of-distribution inputs to deep generative models using typicality
E Nalisnick, A Matsukawa, YW Teh, B Lakshminarayanan
arXiv preprint arXiv:1906.02994, 2019
Cited by 196 · 2019
A dual embedding space model for document ranking
B Mitra, E Nalisnick, N Craswell, R Caruana
arXiv preprint arXiv:1602.01137, 2016
Cited by 184 · 2016
Bayesian batch active learning as sparse subset approximation
R Pinsler, J Gordon, E Nalisnick, JM Hernández-Lobato
Advances in neural information processing systems 32, 2019
Cited by 141 · 2019
Character-to-character sentiment analysis in Shakespeare’s plays
ET Nalisnick, HS Baird
Proceedings of the 51st Annual Meeting of the Association for Computational …, 2013
Cited by 115 · 2013
Approximate inference for deep latent Gaussian mixtures
E Nalisnick, L Hertel, P Smyth
NIPS Workshop on Bayesian Deep Learning 2, 131, 2016
Cited by 108 · 2016
Bayesian deep learning via subnetwork inference
E Daxberger, E Nalisnick, JU Allingham, J Antorán, ...
International Conference on Machine Learning, 2510-2521, 2021
Cited by 107* · 2021
Hybrid models with deep and invertible features
E Nalisnick, A Matsukawa, YW Teh, D Gorur, B Lakshminarayanan
International Conference on Machine Learning, 4723-4732, 2019
Cited by 97 · 2019
Dropout as a structured shrinkage prior
E Nalisnick, JM Hernández-Lobato, P Smyth
International Conference on Machine Learning, 4712-4722, 2019
Cited by 47 · 2019
Extracting sentiment networks from Shakespeare's plays
ET Nalisnick, HS Baird
2013 12th International Conference on Document Analysis and Recognition, 758-762, 2013
Cited by 46 · 2013
Calibrated learning to defer with one-vs-all classifiers
R Verma, E Nalisnick
International Conference on Machine Learning, 22184-22202, 2022
Cited by 43 · 2022
Do Bayesian neural networks need to be fully stochastic?
M Sharma, S Farquhar, E Nalisnick, T Rainforth
International Conference on Artificial Intelligence and Statistics, 7694-7722, 2023
Cited by 38 · 2023
On priors for Bayesian neural networks
ET Nalisnick
University of California, Irvine, 2018
Cited by 36 · 2018
Learning to defer to multiple experts: Consistent surrogate losses, confidence calibration, and conformal ensembles
R Verma, D Barrejón, E Nalisnick
International Conference on Artificial Intelligence and Statistics, 11415-11434, 2023
Cited by 26 · 2023
Adapting the linearised Laplace model evidence for modern deep learning
J Antorán, D Janz, JU Allingham, E Daxberger, RR Barbano, E Nalisnick, ...
International Conference on Machine Learning, 796-821, 2022
Cited by 26 · 2022
A scale mixture perspective of multiplicative noise in neural networks
E Nalisnick, A Anandkumar, P Smyth
arXiv preprint arXiv:1506.03208, 2015
Cited by 26 · 2015
Predictive complexity priors
E Nalisnick, J Gordon, JM Hernández-Lobato
International Conference on Artificial Intelligence and Statistics, 694-702, 2021
Cited by 25 · 2021
Articles 1–20