Abstract:
Measures of variation are statistical measures that help describe the distribution of a data set. These measures are used either separately or together, giving a wide variety of ways of measuring the variability of data. Researchers and mathematicians found, however, that these measures were not perfect: they violated algebraic laws and possessed weaknesses that could not be ignored. As a result, a new measure of variation, known as the geometric measure of variation, was formulated. The new measure overcame the weaknesses of the existing measures: it obeyed the algebraic laws, allowed further algebraic manipulation, and was not affected by outliers or skewed data sets. Researchers also established that the geometric measure was more efficient than the standard deviation and that its estimates were always smaller than those of the standard deviation, but they did not determine the relationship between the two measures or how sample characteristics affect the minimum difference between the geometric measure and the standard deviation. The main aim of this study was to empirically determine the ratio factor between the standard deviation and the geometric measure and, specifically, how variables such as sample size, outliers and the geometric measure itself affect the minimum difference between the geometric measure and the standard deviation.
Data simulation was used to achieve the study's objectives. Samples were simulated individually under four distributions: normal, Poisson, chi-square and Bernoulli. A hierarchical linear regression model was then fitted to the normal, skewed, binary and countable data sets. Based on the results obtained, there is always a positive, significant ratio factor between the geometric measure and the standard deviation in all types of data sets. The ratio factor was influenced by the presence of outliers and by sample size. The presence of outliers increased the difference between the geometric measure and the standard deviation in the skewed and countable data sets, while in the binary data sets it decreased that difference. For the normal and binary data sets, an increase in sample size had no significant effect on the difference between the geometric measure and the standard deviation, but for the skewed and countable data sets an increase in sample size decreased that difference.
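As a rough illustration of the simulation step, the sketch below draws one sample from each of the four distributions named above and computes its sample standard deviation. The distribution parameters, sample size, and seed are assumptions for illustration only; the paper's actual simulation settings, and the definition of the geometric measure itself, are not reproduced in this abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate one sample from each of the four distributions used in the study.
# Parameter values (loc, lam, df, p) and the sample size of 100 are
# illustrative assumptions, not the paper's settings.
samples = {
    "normal":    rng.normal(loc=0.0, scale=1.0, size=100),
    "poisson":   rng.poisson(lam=4.0, size=100),      # countable data
    "chisquare": rng.chisquare(df=3, size=100),       # skewed data
    "bernoulli": rng.binomial(n=1, p=0.5, size=100),  # binary data
}

# Sample standard deviation (ddof=1) for each simulated data set; the
# geometric measure would be computed alongside it per the paper's definition.
sds = {name: float(np.std(x, ddof=1)) for name, x in samples.items()}
for name, sd in sds.items():
    print(f"{name}: sd = {sd:.4f}")
```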
Keywords: geometric measure; standard deviation; data simulation; distribution; hierarchical regression.