$$\sigma = \operatorname{Var}(R_{a}), \qquad \text{std} = \sqrt{\sigma}$$
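As a minimal sketch of these two quantities in code, assuming NumPy and a small array of made-up single-period returns standing in for $R_a$:

```python
import numpy as np

# Hypothetical single-period returns (made-up values for illustration only);
# R_a would normally be a real return series.
returns = np.array([0.01, -0.02, 0.015, 0.003, -0.007])

sigma = returns.var(ddof=1)   # sample variance of the returns (sigma in the formula above)
std = np.sqrt(sigma)          # standard deviation: square root of the variance

print(f"variance = {sigma:.6f}, std = {std:.6f}")
```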
It follows that risk, measured as the standard deviation of returns, is not a linear function of the number of observations. To estimate risk over multiple periods, it would not make sense to multiply the single-period standard deviation by the total number of periods: this could quickly lead to an absurd result where total risk was greater than 100%. Instead, the incremental risk added by each additional period needs to shrink as the number of periods grows. Under the common assumption that returns in different periods are uncorrelated, their variances add, so variance grows linearly with the number of periods and the standard deviation grows with its square root. The standard accepted practice is therefore the square-root-of-time rule: to scale standard deviation across multiple periods, we multiply by the square root of the number of periods we wish to calculate over, and to annualize standard deviation, we multiply by the square root of the number of periods per year.
$$\sqrt{\sigma}\cdot\sqrt{\text{periods}}$$
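A sketch of the same scaling in code, reusing the hypothetical returns from the earlier example and assuming daily observations with 252 trading periods per year (both the return values and the period count are illustrative assumptions):

```python
import numpy as np

# Same hypothetical single-period returns as in the earlier sketch (illustrative only)
returns = np.array([0.01, -0.02, 0.015, 0.003, -0.007])
std = returns.std(ddof=1)          # single-period standard deviation

periods_per_year = 252             # assumed number of trading periods per year for daily data

# Multi-period scaling: single-period std times the square root of the number of periods
std_over_10_periods = std * np.sqrt(10)

# Annualization: single-period std times the square root of periods per year
std_annualized = std * np.sqrt(periods_per_year)

print(f"10-period std = {std_over_10_periods:.4f}, annualized std = {std_annualized:.4f}")
```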
Note that any multiperiod or annualized number should be viewed with
suspicion if the number of observations is small.