Mathematical notions of privacy, such as differential privacy, are often
stated as probabilistic guarantees that are difficult to interpret. It is
imperative, however, that the implications of data sharing be effectively
communicated to the data principal to ensure informed decision-making and to
offer full transparency with regard to the associated privacy risks. To this end,
our work presents a rigorous quantitative evaluation of the protection
conferred by private learners by investigating their resilience to training
data reconstruction attacks. We accomplish this by deriving non-asymptotic
lower bounds on the reconstruction error incurred by any adversary against
$(\epsilon, \delta)$-differentially private learners for target samples that
belong to any compact metric space. Working with a generalization of
differential privacy, termed metric privacy, we remove boundedness assumptions
on the input space prevalent in prior work, and prove that our results hold for
general locally compact metric spaces. We extend the analysis to the
high-dimensional regime, in which the input data dimensionality may exceed the
adversary’s query budget, and demonstrate that our bounds are minimax optimal
in certain regimes.
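
For reference, the two privacy notions invoked above can be sketched as follows; the notation ($M$ for the randomized learner, $D \sim D'$ for datasets differing in a single record) is chosen here for illustration and may differ from the formalization used in the body of the paper. A mechanism $M$ is $(\epsilon, \delta)$-differentially private if, for all neighboring datasets $D \sim D'$ and all measurable events $S$,
\[
  \Pr[M(D) \in S] \;\le\; e^{\epsilon} \, \Pr[M(D') \in S] + \delta,
\]
and a mechanism $M$ defined on a metric space $(\mathcal{X}, d)$ satisfies $\epsilon$-metric privacy (also known as $d_{\mathcal{X}}$-privacy) if, for all inputs $x, x' \in \mathcal{X}$ and all measurable events $S$,
\[
  \Pr[M(x) \in S] \;\le\; e^{\epsilon \, d(x, x')} \, \Pr[M(x') \in S].
\]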