Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Closely related is an error bound used in the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.

Key words. Convex optimization, compressed sensing, ℓ1-regularization, nuclear/trace norm, regression, variable selection, sensor network localization, approximation accuracy, proximal gradient method, error bound, linear convergence.
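As a concrete illustration of the proximal gradient method on one of the structured problems above, the following is a minimal sketch of ISTA applied to ℓ1-regularized least squares (the lasso). The function names `ista` and `soft_threshold`, the fixed step size 1/L, and the iteration count are illustrative choices for this sketch, not the paper's exact algorithm or parameters.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # Step size 1/L, where L = ||A||_2^2 is a Lipschitz constant of the
    # gradient of the smooth part.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)              # gradient of 0.5*||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / L)  # proximal step
    return x
```

Linear convergence of such iterations, under an error bound condition rather than strong convexity, is the kind of result the abstract refers to.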