This paper uses abstract optimization theory to characterize and analyze the stochastic process describing the current marginal expected value of perfect information in a class of discrete-time dynamic stochastic optimization problems that includes the familiar optimal control problem with an infinite planning horizon. Applying abstract Lagrange multiplier techniques to the usual nonanticipativity constraints, treated explicitly in terms of the adaptation of the decision sequence, it is shown that the marginal expected value of perfect information is a nonanticipative supermartingale. For a given problem, the statistics of this process are of fundamental practical importance in deciding whether it remains necessary to account for stochastic variation in the evolution of the sequence of optimal decisions.
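The paper's object of study, the expected value of perfect information (EVPI), can be illustrated outside its abstract setting by the standard finite-scenario definition: EVPI is the gap between the wait-and-see value (the decision adapts to each realized scenario) and the here-and-now value (a single nonanticipative decision is fixed in advance). The sketch below is not the paper's construction; it is a minimal two-scenario, newsvendor-style example with invented numbers, showing that this gap is nonnegative.

```python
# Toy illustration of the expected value of perfect information (EVPI).
# Two equally likely demand scenarios; all figures are hypothetical.

scenarios = [(0.5, 30.0), (0.5, 70.0)]  # (probability, demand)
price, cost = 10.0, 4.0                 # unit sell price and unit order cost


def profit(order, demand):
    """Profit from ordering `order` units when demand turns out to be `demand`."""
    return price * min(order, demand) - cost * order


# Here-and-now value: one nonanticipative order chosen before demand is known.
# With discrete demand, an optimal order lies at one of the demand points.
candidates = [d for _, d in scenarios]
here_and_now = max(
    sum(p * profit(x, d) for p, d in scenarios) for x in candidates
)

# Wait-and-see value: under perfect information the order matches each demand.
wait_and_see = sum(p * profit(d, d) for p, d in scenarios)

evpi = wait_and_see - here_and_now  # always nonnegative
print(here_and_now, wait_and_see, evpi)
```

In this example the here-and-now value is 220, the wait-and-see value is 300, and the EVPI is 80; a small EVPI would signal, in the spirit of the abstract's closing remark, that little is lost by ignoring the stochastic variation.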