Entropy bounds on Bayesian learning

Abstract: An observer of a process $(x_t)_{t \ge 1}$ believes the process is governed by $Q$ whereas the true law is $P$. We bound the expected average distance between $P(x_t \mid x_1,\dots,x_{t-1})$ and $Q(x_t \mid x_1,\dots,x_{t-1})$ for $t=1,\dots,n$ by a function of the relative entropy between the marginals of $P$ and $Q$ on the first $n$ realizations. We apply this bound to the cost of learning in sequential decision problems and to the merging of $Q$ to $P$.
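
A minimal sketch, in LaTeX, of the type of bound the abstract describes: it combines the chain rule for relative entropy with Pinsker's inequality. The choice of total variation distance and the exact constant are illustrative assumptions and may differ from the bound actually proved in the paper.

% Assumed notation: P_n, Q_n denote the marginals of P and Q on (x_1,...,x_n),
% and d(.,.) denotes the total variation distance.
\begin{align*}
  % Chain rule: the relative entropy of the marginals decomposes across periods.
  D(P_n \,\|\, Q_n)
    &= \sum_{t=1}^{n} \mathbb{E}_P\!\left[ D\big(P(\cdot \mid x_1,\dots,x_{t-1}) \,\big\|\, Q(\cdot \mid x_1,\dots,x_{t-1})\big) \right], \\
  % Pinsker's inequality per period, then Jensen's inequality on the average:
  \mathbb{E}_P\!\left[\frac{1}{n}\sum_{t=1}^{n} d\big(P(\cdot \mid x_1,\dots,x_{t-1}),\, Q(\cdot \mid x_1,\dots,x_{t-1})\big)\right]
    &\le \sqrt{\frac{1}{2n}\, D(P_n \,\|\, Q_n)}.
\end{align*}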
Document type: Journal articles

https://hal-pjse.archives-ouvertes.fr/halshs-00754314
Contributor: Caroline Bauer
Submitted on: Tuesday, November 20, 2012 - 8:54:28 AM
Last modification on: Tuesday, April 24, 2018 - 5:20:09 PM

Citation

Olivier Gossner, Tristan Tomala. Entropy bounds on Bayesian learning. Journal of Mathematical Economics, Elsevier, 2008, 44 (1), pp.24-32. ⟨10.1016/j.jmateco.2007.04.006⟩. ⟨halshs-00754314⟩
