$\mathrm{ReLU}$ (assuming the plain ReLU, not leaky ReLU) is a piecewise-linear activation function defined as $\sigma(z)=\max(0,z)$, so it acts as the identity on positive inputs: if $z>0$ then $\sigma(z)=z$.
For the given neural network, $x_1,x_2,x_3>0$ and all weights are positive and non-zero, so every pre-activation is positive, $\mathrm{ReLU}(wx)=wx$, and the activation can be ignored.
The biases are given as zero.
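As a quick numerical sanity check (a minimal NumPy sketch; the `relu` helper and the example values are illustrative, not from the problem statement), ReLU leaves any positive pre-activation unchanged:

```python
import numpy as np

def relu(z):
    # Elementwise max(0, z)
    return np.maximum(0.0, z)

# With positive inputs and positive, non-zero weights, the pre-activation w*x
# is positive, so ReLU returns it unchanged.
x = np.array([1.5, 2.0, 0.5])   # example positive inputs x1, x2, x3
w = np.array([1.0, 1.0, 1.0])   # example positive weights
z = w * x
assert np.allclose(relu(z), z)  # ReLU acts as the identity here
```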
$\textbf{Method 1:}$
For the first hidden layer, say $h_1$, label the neurons $h_{11},h_{12},h_{13}$:
$h_{11}=1\cdot x_1+1\cdot x_2=x_1+x_2$
$h_{12}=1\cdot x_1+1\cdot x_3=x_1+x_3$
$h_{13}=1\cdot x_1+1\cdot x_2+1\cdot x_3=x_1+x_2+x_3$
For the second hidden layer, say $h_2$, label the neurons $h_{21},h_{22}$:
$h_{21}=h_{22}=2\cdot h_{11}+2\cdot h_{12}+2\cdot h_{13}=2(h_{11}+h_{12}+h_{13})=2(3x_1+2x_2+2x_3)$
For the output layer:
$\hat{y}=3\cdot h_{21}+3\cdot h_{22}=3(h_{21}+h_{22})=6\,h_{21}=12(3x_1+2x_2+2x_3)$
$\hat{y}=36x_1+24x_2+24x_3$
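The layer-by-layer computation above can be verified numerically. The sketch below (assuming only the connectivity implied by the equations for $h_{11},h_{12},h_{13}$; the function names are illustrative) runs the forward pass, ReLU included, and compares it against $36x_1+24x_2+24x_3$:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x1, x2, x3):
    # First hidden layer, as in Method 1
    h11 = relu(x1 + x2)
    h12 = relu(x1 + x3)
    h13 = relu(x1 + x2 + x3)
    # Second hidden layer: each neuron receives every h1 neuron with weight 2
    h21 = relu(2 * h11 + 2 * h12 + 2 * h13)
    h22 = relu(2 * h11 + 2 * h12 + 2 * h13)
    # Output layer: weight 3 from each h2 neuron
    return 3 * h21 + 3 * h22

# Check against the derived equivalent weights p=36, q=24, r=24
x1, x2, x3 = 0.7, 1.2, 2.5  # arbitrary positive inputs
assert np.isclose(forward(x1, x2, x3), 36 * x1 + 24 * x2 + 24 * x3)
```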
Hence, with the hidden layers collapsed, the network is equivalent to one that connects the inputs directly to the output with weights $p=36$, $q=24$, $r=24$.
So, the answer is $p=36$, $q=24$, $r=24$.
$\textbf{Method 2: Using Matrix Operations:}$
Since ReLU acts as the identity here, the whole network collapses to a single matrix product: writing each layer as a weight matrix (with a zero entry wherever a connection is absent, since the first layer is not fully connected), we get $\hat{y}=W_3 W_2 W_1\,\mathbf{x}$, and the effective weights $(p,q,r)$ are the entries of $W_3 W_2 W_1$. The matrices can be read off the equations in Method 1, as in the sketch below.
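Concretely, here is a minimal NumPy sketch of the matrix version, with the weight matrices read off the equations in Method 1 and zeros standing in for the missing first-layer connections:

```python
import numpy as np

# Layer weight matrices from Method 1; a 0 entry means "no connection".
W1 = np.array([[1, 1, 0],    # h11 = x1 + x2
               [1, 0, 1],    # h12 = x1 + x3
               [1, 1, 1]])   # h13 = x1 + x2 + x3
W2 = np.array([[2, 2, 2],    # h21 = 2*h11 + 2*h12 + 2*h13
               [2, 2, 2]])   # h22 = 2*h11 + 2*h12 + 2*h13
W3 = np.array([[3, 3]])      # y_hat = 3*h21 + 3*h22

# ReLU is the identity for these positive inputs, so the network collapses
# to a single matrix of effective weights.
print(W3 @ W2 @ W1)          # [[36 24 24]]  ->  p=36, q=24, r=24
```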