Riccardo Zecchina

Affiliations: 
ICTP, Trieste, Grignano, Friuli-Venezia Giulia, Italy 
Area:
Statistical Physics, message-passing algorithms
Mean distance: 16.27 (cluster 17)
 
Cross-listing: Computational Biology Tree

Publications

Baldassi C, Pittorino F, Zecchina R. (2019) Shaping the learning landscape in neural networks around wide flat minima. Proceedings of the National Academy of Sciences of the United States of America
Baldassi C, Malatesta EM, Zecchina R. (2019) Properties of the Geometry of Solutions and Capacity of Multilayer Neural Networks with Rectified Linear Unit Activations. Physical Review Letters. 123: 170602
Saglietti L, Gerace F, Ingrosso A, et al. (2018) From statistical inference to a differential learning rule for stochastic neural networks. Interface Focus. 8: 20180033
Baldassi C, Gerace F, Kappen HJ, et al. (2018) Role of Synaptic Stochasticity in Training Low-Precision Neural Networks. Physical Review Letters. 120: 268103
Baldassi C, Zecchina R. (2018) Efficiency of quantum vs. classical annealing in nonconvex learning problems. Proceedings of the National Academy of Sciences of the United States of America
Bosia C, Sgrò F, Conti L, et al. (2017) RNAs competing for microRNAs mutually influence their fluctuations in a highly non-linear microRNA-dependent manner in single cells. Genome Biology. 18: 37
Baldassi C, Borgs C, Chayes JT, et al. (2016) Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes. Proceedings of the National Academy of Sciences of the United States of America
Baldassi C, Gerace F, Lucibello C, et al. (2016) Learning may need only a few bits of synaptic precision. Physical Review. E. 93: 052313
Baldassi C, Ingrosso A, Lucibello C, et al. (2015) Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses. Physical Review Letters. 115: 128101
Alemi A, Baldassi C, Brunel N, et al. (2015) A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks. PLoS Computational Biology. 11: e1004439