Diffusion processes are instrumental in describing the movement of a continuous quantity in a generic network of interacting agents. Here, we present a probabilistic framework for diffusion in networks and study in particular two classes of agent interactions, distinguished by whether the total network quantity obeys a conservation law. Focusing on asymmetric interactions between agents, we show how the dynamics of conservative and non-conservative networks relate to the weighted in-degree and out-degree Laplacians. For uncontrolled networks, we characterize the convergence behavior of our framework, including the case of variable network topologies, as a function of the eigenvalues and eigenvectors of the weighted graph Laplacian. In addition, we study the control of the network dynamics by means of external controls and alterations of the network topology. For networks with exogenous controls, we analyze convergence and provide a method to quantify the difference between conservative and non-conservative network dynamics based on a comparison of their respective attainability domains. To construct a network topology tailored to a desired behavior, we propose a Markov decision process (MDP) that learns specific network adjustments through a reinforcement learning algorithm. The presented network control and design schemes enable the alteration of the dynamic and stationary network behavior in conservative and non-conservative networks.
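The distinction between conservative and non-conservative Laplacian dynamics can be illustrated numerically. The following is a minimal sketch, not the paper's own implementation: it assumes the common convention that `A[i, j]` is the weight of the edge from node `i` to node `j`, takes the out-degree Laplacian `L = D_out - A` (zero row sums), and contrasts consensus-type dynamics `dx/dt = -L x` with flow-type dynamics `dx/dt = -Lᵀ x`, which conserve the total quantity because `1ᵀLᵀ = (L1)ᵀ = 0`. The adjacency weights are hypothetical example values.

```python
import numpy as np

# Weighted adjacency matrix of a 3-node digraph (hypothetical example):
# A[i, j] is the weight of the directed edge from node i to node j.
A = np.array([[0.0, 1.0, 0.5],
              [0.2, 0.0, 1.0],
              [1.0, 0.3, 0.0]])

D_out = np.diag(A.sum(axis=1))   # weighted out-degree matrix
L_out = D_out - A                # out-degree Laplacian: row sums are zero

def simulate(L, x0, dt=0.01, steps=2000):
    """Forward-Euler integration of the linear dynamics dx/dt = -L x."""
    x = x0.copy()
    for _ in range(steps):
        x = x - dt * (L @ x)
    return x

x0 = np.array([1.0, 0.0, 0.0])   # all mass initially on node 0

# Non-conservative (consensus-type) dynamics: dx/dt = -L_out x.
# Equilibria are consensus states; the total sum(x) is generally not preserved.
x_nc = simulate(L_out, x0)

# Conservative dynamics: dx/dt = -L_out^T x.
# Since 1^T L_out^T = (L_out 1)^T = 0, the total 1^T x is invariant.
x_c = simulate(L_out.T, x0)

print("consensus state:", x_nc)
print("conserved total:", x_c.sum())
```

For this strongly connected example graph the nonzero Laplacian eigenvalues have positive real part, so the non-conservative trajectory converges to a consensus state, while the conservative trajectory redistributes the initial quantity across nodes with its sum unchanged, matching the eigenvalue-based convergence analysis summarized above.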