Diffusion processes are a fundamental way to describe the transfer of a continuous quantity across a generic network of interacting agents. In this work, we establish a probabilistic framework for diffusion in networks. In addition, we classify agent interactions according to two protocols, under which the total network quantity is either conserved or variable. For both protocols, we use directed graphs to model asymmetric interactions between agents. Specifically, we define how the dynamics of conservative and non-conservative networks relate to the weighted in-degree and out-degree Laplacians, respectively. Our framework also allows the considered quantity to be added to or subtracted from a set of agents, thereby accommodating external network control and targeted network design. We show how network diffusion can be externally manipulated by injecting time-varying input functions at individual nodes. Desirable network structures can also be constructed by modifying the dominant diffusion modes. To this end, we cast these network adjustments as a Markov decision process and learn them with a reinforcement learning algorithm suitable for large networks. The proposed control and design schemes enable flow modifications that alter the dynamic and stationary behavior of both conservative and non-conservative networks.
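As a minimal illustrative sketch (the notation here is an assumption, not the paper's own), both protocols can be viewed as linear Laplacian dynamics with node-level inputs,
\[
  \dot{x}(t) = -L\,x(t) + B\,u(t),
\]
where $x(t)$ stacks the agents' quantities, $L$ stands for the weighted in-degree Laplacian in the conservative case and the weighted out-degree Laplacian in the non-conservative case, $u(t)$ collects the time-varying input functions, and $B$ selects the nodes at which they are injected. In the conservative case, $\mathbf{1}^{\top}L = \mathbf{0}^{\top}$, so the total quantity $\mathbf{1}^{\top}x(t)$ changes only through the injected input.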