This paper systematically surveys the main directions of development of stochastic quasigradient (SQG) methods, which allow one to solve optimization problems without calculating the precise values of the objective and constraint functions (let alone their derivatives). For deterministic nonlinear optimization problems, these methods can be regarded as methods of random search. For stochastic programming problems, SQG methods generalize the well-known stochastic approximation method for unconstrained optimization of the expectation of random functions to problems involving general constraints.
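To make the idea concrete, the following is a minimal sketch (not taken from the paper) of a projected stochastic quasigradient iteration, x_{k+1} = Π_X(x_k − ρ_k ξ_k), where ξ_k is a statistical estimate of the gradient at x_k and Π_X projects onto the feasible set. The example problem, step rule ρ_k = 1/(k+1), and all function names are illustrative assumptions.

```python
import random

def sqg_minimize(grad_sample, project, x0, n_iters=5000, seed=0):
    """Projected stochastic quasigradient iteration (sketch):
    x_{k+1} = project(x_k - rho_k * xi_k), with rho_k = 1/(k+1),
    where grad_sample returns an unbiased estimate of the gradient."""
    rng = random.Random(seed)
    x = x0
    for k in range(n_iters):
        rho = 1.0 / (k + 1)          # diminishing step size (Robbins-Monro type)
        x = project(x - rho * grad_sample(x, rng))
    return x

# Hypothetical illustrative problem, not one discussed in the paper:
# minimize F(x) = E[(x - omega)^2], omega ~ N(2, 1), subject to 0 <= x <= 1.
# The unconstrained minimizer is x = 2, so the constrained solution is x = 1.
grad = lambda x, rng: 2.0 * (x - rng.gauss(2.0, 1.0))  # unbiased estimate of F'(x)
proj = lambda x: min(max(x, 0.0), 1.0)                 # projection onto [0, 1]
x_star = sqg_minimize(grad, proj, x0=0.5)
```

Only sampled gradients of the random integrand are used; the expectation F and its derivative are never evaluated, which is the defining feature of SQG methods for such problems.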