The middle square method generates pseudo-random numbers. To produce a random number of N digits, it takes an arbitrary N-digit number, squares it, and keeps the N central digits of the square. The algorithm is therefore structured in three steps:
- choose an arbitrary N-digit number (the seed)
- square it
- take the N central digits of the square as the generated number (and as the next seed)
This method does generate apparently random numbers, but it has defects that call for care in choosing the initial number. The choice of the seed is decisive: the method runs into trouble with seeds made of equal digits, such as 33 or 66, and once a 0 (or 00, and so on) appears among the central digits it can no longer generate new values. When choosing the seed, therefore, make sure its square does not have zeros among its central digits.
Linear and mixed congruential methods generate random numbers based on the idea that the remainder of a division can be treated as random:
\[ y = x \mod m \]
The purpose of these methods is to create a sequence of random numbers through an algorithm that adds, subtracts, multiplies or divides values from a number and then calculates the remainder of the division:
\[ x_1 = (a * x_0 + b) \mod m \\ x_2 = (a * x_1 + b) \mod m \\ ... \\ x_n = (a * x_{n-1} + b) \mod m \]
However, appropriate considerations must be made on the choice of the coefficients:
- \(m\) must be a very large prime number
- \(b\) and \(m\) must be coprime
- \(a\) should be of the form \(2^r + 1\) with \(r \geq 2\)
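The recurrence above can be sketched directly; the toy parameters below merely illustrate the three constraints (\(a = 5 = 2^2 + 1\), \(b = 3\) coprime with \(m\), \(m = 101\) prime, though a real generator would use a far larger modulus):

```python
def lcg(seed, a, b, m, count):
    """Linear congruential generator: x_{n+1} = (a * x_n + b) mod m."""
    values = []
    x = seed
    for _ in range(count):
        x = (a * x + b) % m
        values.append(x)
    return values

# Toy parameters chosen to satisfy the constraints in the text,
# not suitable for real use (m is far too small).
print(lcg(seed=1, a=5, b=3, m=101, count=5))  # → [8, 43, 16, 83, 14]
```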
Random number generation amounts to generating values from a uniform distribution, since in this distribution all values are independent and equally probable. To test a random number generation algorithm, therefore, you can run an empirical test to verify uniformity and a chi-square test to verify independence.
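One common form of such a test bins the generated values and compares observed against expected counts with a chi-square statistic; the sketch below applies it to uniformity (the function name and bin count are choices of this sketch):

```python
import random

def chi_square_uniformity(values, bins=10):
    """Chi-square goodness-of-fit statistic for uniformity on [0, 1).

    Bins the sample, then sums (observed - expected)^2 / expected
    over the bins; small values are consistent with uniformity.
    """
    observed = [0] * bins
    for v in values:
        observed[min(int(v * bins), bins - 1)] += 1
    expected = len(values) / bins
    return sum((o - expected) ** 2 / expected for o in observed)

random.seed(42)
sample = [random.random() for _ in range(10_000)]
stat = chi_square_uniformity(sample)
# With 10 bins there are 9 degrees of freedom; the 95% critical value
# is about 16.92, so a statistic below that is consistent with uniformity.
print(round(stat, 2))
```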
To generate a realization of a random variable \(Y\) with distribution function \(F_y(y)\), we look for a transformation of one or more independent \(U(0,1)\) random variables such that the relation \(y = f(U)\) holds. This transformation is found by inverting the distribution function of \(Y\) and evaluating it at \(U\):
\[F_y(y) = u \rightarrow Y = F_y^{-1}(u)\]
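As a concrete instance of this inverse transform, take the exponential distribution: \(F_y(y) = 1 - e^{-\lambda y}\), so \(F_y^{-1}(u) = -\ln(1-u)/\lambda\). A minimal sketch (function and parameter names are choices of this sketch):

```python
import math
import random

def exponential_inverse_transform(lam, n, seed=0):
    """Generate n Exponential(lam) values via F^{-1}(u) = -ln(1 - u) / lam."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

sample = exponential_inverse_transform(lam=2.0, n=100_000)
# The sample mean should be close to the theoretical mean 1/lam = 0.5.
print(round(sum(sample) / len(sample), 2))
```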
Some random variables can be constructed, starting from their definition, as functions of random variables with "simpler" distributions: think, for example, of the chi-square with one degree of freedom, which is the square of a standard normal.
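The chi-square example reads directly as code (a sketch; the function name is a choice of this sketch):

```python
import random

def chi_square_1df(n, seed=0):
    """Generate chi-square(1) values as squares of standard normal draws."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]

sample = chi_square_1df(100_000)
# The mean of a chi-square with 1 degree of freedom is 1.
print(round(sum(sample) / len(sample), 1))
```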
The acceptance-rejection algorithm was introduced by von Neumann in 1951. Suppose you want to generate values from a random variable \(Y\) with a "complicated" density function, but you can only generate from a "simple" random variable \(X\) with density \(g_x(x)\) and from a \(U(0,1)\), with the support of \(Y\) contained in that of \(g_x(\cdot)\). You can then obtain random values of \(Y\) from \(g_x(x)\) by keeping exactly those candidates \(x_i\) for which \(u_i\), multiplied by a coefficient \(b\) and by \(g_x(x_i)\), does not exceed the density of \(Y\) evaluated at \(x_i\), that is:
for \(n \gg 0\), with \(u_i \sim U(0,1)\) and \(x_i \sim g_x(x)\) for \(i = 1, ..., n\), accept \(x_i\) whenever
\[u_i \space b \space g_x(x_i) \leq f_y(x_i)\]
where \(b\) corresponds to the maximum over \(y\) of the ratio between \(f_y(y)\) and \(g_x(y)\), that is:
\[ b= \max_{y}{\frac{f_y(y)}{g_x(y)}} \]
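A concrete sketch of the algorithm, taking as the "complicated" target a Beta(2, 2) with density \(f_y(y) = 6y(1-y)\) on \([0,1]\) and as the "simple" proposal \(g_x = U(0,1)\), so that \(b = \max_y f_y(y)/g_x(y) = f_y(1/2) = 1.5\) (the target choice and function name are assumptions of this sketch):

```python
import random

def accept_reject_beta22(n, seed=0):
    """Acceptance-rejection sampling of Beta(2, 2), f(y) = 6 y (1 - y).

    Proposal g is U(0,1), so g(x) = 1 on [0, 1] and
    b = max_y f(y) / g(y) = f(1/2) = 1.5.
    """
    rng = random.Random(seed)
    b = 1.5
    out = []
    while len(out) < n:
        x = rng.random()   # candidate x_i from g
        u = rng.random()   # u_i for the acceptance test
        if u * b * 1.0 <= 6.0 * x * (1.0 - x):  # u_i * b * g(x_i) <= f(x_i)
            out.append(x)
        # rejected candidates are discarded; expected acceptance rate is 1/b
    return out

sample = accept_reject_beta22(100_000)
# Beta(2, 2) has mean 1/2.
print(round(sum(sample) / len(sample), 2))
```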
This method exploits the central limit theorem to generate normal values from \(U(0,1)\) draws:
\(u_i \sim U(0,1)\) with \(i = 1, ..., n\)
\[ \frac{\frac{\sum u_i}{n}- E[U]}{\frac{\sqrt{Var(U)}}{\sqrt n}} \sim N(0,1)\]
In particular, assuming \(n=12\) generations from a \(U(0,1)\), we have:
\[ E[U]=\frac{1}{2} \\ Var(U)=\frac{1}{12} \]
so that \(\frac{\sqrt{Var(U)}}{\sqrt n} = \frac{1}{12}\) and therefore:
\[ \sum u_i - 6 \sim N(0,1) \]
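The \(n = 12\) case above reduces to summing twelve uniforms and subtracting 6; a minimal sketch (the function name is a choice of this sketch):

```python
import random

def clt_normal(n, seed=0):
    """Approximate N(0,1) draws as the sum of 12 U(0,1) values minus 6."""
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(12)) - 6.0 for _ in range(n)]

sample = clt_normal(100_000)
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
# Sample mean and variance should be close to 0 and 1.
print(round(mean, 1), round(var, 1))
```

Note that this is only approximately normal: the support is bounded in \([-6, 6]\), so the tails are truncated compared with a true \(N(0,1)\).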
Method to generate from a bivariate normal with independent components; the algorithm is structured in four phases:
Where \(Z_1, Z_2\) are two independent \(N(0,1)\).
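The four phases are not listed above, but the description (a pair of independent \(N(0,1)\) components generated from uniforms) matches the Box-Muller transform; the sketch below assumes that is the intended method:

```python
import math
import random

def box_muller(n_pairs, seed=0):
    """Box-Muller transform: pairs (Z1, Z2) of independent N(0,1) values.

    From u1, u2 ~ U(0,1):
        r = sqrt(-2 ln u1), theta = 2 pi u2
        Z1 = r cos(theta), Z2 = r sin(theta)
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        u1 = 1.0 - rng.random()  # shift to (0, 1] so log(u1) is defined
        u2 = rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        theta = 2.0 * math.pi * u2
        pairs.append((r * math.cos(theta), r * math.sin(theta)))
    return pairs

pairs = box_muller(50_000)
z1 = [p[0] for p in pairs]
# Each component should have sample mean near 0.
print(round(sum(z1) / len(z1), 1))
```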