This article proposes solving large-scale convex optimization problems via saddle points of the standard Lagrangian. A recent approach to saddle point computation is specialized, by way of a specific perturbation technique and a unique scaling method, to convex optimization problems with differentiable objective and constraint functions. In each iteration the update directions for the primal and dual variables are determined by gradients of the Lagrangian; these gradients are evaluated at perturbed points, which are generated from the current points via auxiliary mappings. The resulting algorithm is well suited to massively parallel computing, and sparsity can be exploited efficiently. Employing a simulation of parallel computation, an experimental code embedded in GAMS is tested on two sets of nonlinear problems. The first set arises from multi-stage stochastic optimization of the US energy economy; the second consists of multi-currency bond portfolio problems. In these stochastic optimization problems the serial time appears approximately proportional to the number of scenarios, while the parallel time appears independent of the number of scenarios. Thus the serial time of our approach grows more slowly with problem size than that of Minos; consequently, for large problems with reasonable precision requirements, our method appears faster than Minos even on a serial computer.
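To make the iteration scheme concrete, the following is a minimal sketch of a perturbed primal-dual gradient step of the kind described above: update directions are Lagrangian gradients evaluated at auxiliary (perturbed) points generated from the current iterate. The sketch uses a plain extragradient-style perturbation with a fixed step size alpha as a stand-in; the paper's actual perturbation technique and scaling method are not reproduced, and the function names grad_f, g, and jac_g are hypothetical user-supplied callables for the Lagrangian L(x, y) = f(x) + y^T g(x) with inequality constraints g(x) <= 0 and multipliers y >= 0.

import numpy as np

def perturbed_saddle_step(grad_f, g, jac_g, x, y, alpha=0.01, iters=1000):
    """Extragradient-style sketch (an assumption, not the paper's exact method)
    for a saddle point of L(x, y) = f(x) + y @ g(x), y >= 0."""
    for _ in range(iters):
        # Gradients of the Lagrangian at the current point:
        # nabla_x L = grad f(x) + J_g(x)^T y,  nabla_y L = g(x)
        gx = grad_f(x) + jac_g(x).T @ y
        gy = g(x)
        # Auxiliary (perturbed) points from the current point
        xb = x - alpha * gx
        yb = np.maximum(y + alpha * gy, 0.0)  # project multipliers onto y >= 0
        # Update using Lagrangian gradients evaluated at the perturbed points;
        # the x- and y-updates are independent, hence amenable to parallelism
        x = x - alpha * (grad_f(xb) + jac_g(xb).T @ yb)
        y = np.maximum(y + alpha * g(xb), 0.0)
    return x, y

# Toy usage: minimize x1^2 + x2^2 subject to x1 + x2 >= 1,
# written as g(x) = 1 - x1 - x2 <= 0; the saddle point is x = (0.5, 0.5), y = 1.
x, y = perturbed_saddle_step(
    grad_f=lambda x: 2.0 * x,
    g=lambda x: np.array([1.0 - x[0] - x[1]]),
    jac_g=lambda x: np.array([[-1.0, -1.0]]),
    x=np.zeros(2), y=np.zeros(1),
)

Note that each primal and dual coordinate update touches only the Lagrangian gradient components it needs, which is what makes sparsity exploitable and the method attractive for massively parallel computation.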