Many optimization problems over function spaces (density estimation, optimal transport, state-constrained linear optimal control,…) involve an infinite number of pointwise inequality constraints. On the other hand, reproducing kernels are well suited to pointwise evaluations, and some kernels encode very rich classes of functions, suitable for approximating the function spaces of interest. However, representer theorems, which ensure the numerical applicability of kernels, do not apply to an infinite number of evaluations. Through constructive algebraic and geometric arguments, I will present how to tackle this question by perturbing the constraints, either through coverings in infinite dimensions or through kernel sum-of-squares representations. Both come at an extra computational cost, in the form of second-order cone or SDP constraints, but quantifying the perturbation makes it possible to prove convergence rates for the scheme.
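A minimal sketch of the covering-based tightening, under assumptions of my own (the notation below is illustrative and not taken from the abstract): for $f$ in an RKHS $\mathcal{H}$ with kernel $k$, the reproducing property gives a Lipschitz bound in the kernel metric, which lets a finite, slightly strengthened set of constraints imply the infinite one.

```latex
% Reproducing property: |f(x) - f(y)| <= ||f||_H * d_k(x, y),
% where d_k is the kernel metric
|f(x) - f(y)| \le \|f\|_{\mathcal{H}}\, d_k(x,y),
\qquad d_k(x,y) := \|k(x,\cdot) - k(y,\cdot)\|_{\mathcal{H}}.

% Let {x_1, ..., x_m} be a delta-covering of X for d_k. Then the
% finitely many perturbed (tightened) constraints
f(x_i) \ge \delta \,\|f\|_{\mathcal{H}}, \quad i = 1, \dots, m,

% imply the original infinite family of constraints: for any x in X,
% pick x_i with d_k(x, x_i) <= delta, so that
f(x) \ge f(x_i) - \|f\|_{\mathcal{H}}\, d_k(x, x_i)
      \ge \delta\,\|f\|_{\mathcal{H}} - \delta\,\|f\|_{\mathcal{H}} = 0.
```

Each tightened constraint bounds the RKHS norm by an affine function of $f$'s evaluations, i.e. a second-order cone constraint, which matches the extra computational price mentioned above; letting $\delta \to 0$ is what drives the convergence analysis.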