This is the internal function that implements the model proposed by L. X. Wang and J. M.
Mendel. It is used to solve regression tasks. Users do not need to call it directly;
instead, use frbs.learn and predict.
WM(data.train, num.labels, type.mf = "GAUSSIAN",
type.tnorm = "PRODUCT", type.implication.func = "ZADEH",
classification = FALSE, range.data = NULL)
a matrix (\(m \times n\)) of normalized data for the training process, where \(m\) is the number of instances and \(n\) is the number of variables; the last column is the output variable. Note that the data must be normalized between 0 and 1.
a matrix (\(1 \times n\)), whose elements represent the number of labels (linguistic terms); \(n\) is the number of variables.
the type of the membership function. See frbs.learn.
a value which represents the type of t-norm. See inference.
a value representing the type of implication function. Consider a rule \(a \to b\); the definitions below use C-style ternary notation (an R sketch of a few of them follows the argument list):
DIENES_RESHER means \((b > 1 - a ? b : 1 - a)\).
LUKASIEWICZ means \((b < a ? 1 - a + b : 1)\).
ZADEH means \((a < 0.5 || 1 - a > b ? 1 - a : (a < b ? a : b))\).
GOGUEN means \((a < b ? 1 : b / a)\).
GODEL means \((a <= b ? 1 : b)\).
SHARP means \((a <= b ? 1 : 0)\).
MIZUMOTO means \((1 - a + a * b)\).
DUBOIS_PRADE means \((b == 0 ? 1 - a : (a == 1 ? b : 1))\).
MIN means \((a < b ? a : b)\).
a boolean representing whether it is a classification problem or not.
a matrix representing the interval (range) of the data.
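The ternary expressions above translate directly into R. The following is a minimal sketch (not the package's internal code) of a few of the implication functions, where a is the antecedent degree and b the consequent degree, both in \([0, 1]\):

imp.zadeh <- function(a, b) {
  if (a < 0.5 || (1 - a) > b) 1 - a else min(a, b)
}
imp.lukasiewicz <- function(a, b) {
  if (b < a) 1 - a + b else 1
}
imp.goguen <- function(a, b) {
  if (a < b) 1 else b / a
}

imp.zadeh(0.8, 0.6)        ## 0.6
imp.lukasiewicz(0.8, 0.6)  ## 0.8
imp.goguen(0.8, 0.6)       ## 0.75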
This function implements the fuzzy rule-based learning system from L. X. Wang and J. M. Mendel's paper. The learning process consists of the following four steps:
Step 1:
Divide the input and output spaces of the given numerical data equally into fuzzy regions, which form the database. Here, a fuzzy region is the interval assigned to one linguistic term, so the number of fuzzy regions equals the number of linguistic terms. For example, the linguistic term "hot" could have the fuzzy region \([1, 3]\); we can then construct a triangular membership function with the corner points \(a = 1\), \(b = 2\), and \(c = 3\), where \(b\) is the middle point whose membership degree equals one.
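As an illustration, a minimal sketch (with assumed names, not the package's internal code) of dividing the normalized domain \([0, 1]\) of one variable into equally spaced fuzzy regions and evaluating a triangular membership function:

num.labels <- 5
centers <- seq(0, 1, length.out = num.labels)  ## middle points b of the triangles
width   <- 1 / (num.labels - 1)                ## distance from b to the corners a and c

## triangular membership degree: 1 at the center b, falling to 0 at b - width and b + width
tri.mf <- function(x, b, w) pmax(0, 1 - abs(x - b) / w)

tri.mf(0.3, centers[2], width)  ## degree of x = 0.3 in the second linguistic term (0.8)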
Step 2:
Generate fuzzy IF-THEN rules covering the training data, using the database from Step 1. First, we calculate the membership degrees of all values in the training data. Then, for each instance, we pick the linguistic term with the maximum degree in every variable; the selected terms form one rule. Repeating this over all instances produces a set of fuzzy rules covering the training data.
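Continuing the sketch from Step 1 (variable names are illustrative only), the rule generated from a single training instance is obtained by taking, for every variable, the index of the linguistic term with the maximum membership degree:

instance <- c(0.30, 0.62, 0.55)  ## two inputs and one output, all normalized to [0, 1]
degrees  <- sapply(instance, function(v) tri.mf(v, centers, width))  ## num.labels x n matrix
rule     <- apply(degrees, 2, which.max)
rule  ## e.g. 2 3 3: IF x1 is term 2 AND x2 is term 3 THEN y is term 3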
Step 3:
Determine a degree for each rule. The degree of a rule is obtained by aggregating the membership degrees of its antecedent and consequent parts; here, the product operator is used for aggregation.
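Continuing the same sketch, with product aggregation the degree of the rule generated in Step 2 is simply the product of the maximum membership degrees of its antecedent and consequent parts:

rule.degree <- prod(apply(degrees, 2, max))
rule.degree  ## product of the degrees picked in Step 2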
Step 4:
Obtain the final rule base by deleting redundant rules. Considering the degrees of the rules, redundant rules (those sharing the same antecedent) are deleted, keeping only the rule with the highest degree.
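A toy sketch of this conflict resolution (the rule-base layout shown here is assumed for illustration, not the package's internal representation): among rules sharing the same antecedent, keep only the one with the highest degree.

rules <- data.frame(antecedent = c("x1 is 2", "x1 is 2", "x1 is 4"),
                    consequent = c("y is 3",  "y is 1",  "y is 5"),
                    degree     = c(0.80, 0.35, 0.60))
final <- do.call(rbind, lapply(split(rules, rules$antecedent),
                               function(g) g[which.max(g$degree), ]))
final  ## the conflicting rule "x1 is 2 -> y is 1" (degree 0.35) is dropped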
The outcome is a Mamdani model. In the prediction phase, there are four steps: fuzzification, checking the rules, inference, and defuzzification.
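Since WM is internal, users would normally go through frbs.learn and predict. A minimal end-to-end sketch on synthetic data (the control-parameter names should be checked against the frbs.learn documentation):

library(frbs)

set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.1)
data.train <- cbind(x, y)
range.data <- matrix(c(0, 1, min(y), max(y)), nrow = 2)  ## min/max of each column

mod <- frbs.learn(data.train, range.data, method.type = "WM",
                  control = list(num.labels = 7, type.mf = "GAUSSIAN",
                                 type.tnorm = "PRODUCT",
                                 type.implication.func = "ZADEH"))

newdata <- matrix(seq(0, 1, length.out = 20), ncol = 1)
pred <- predict(mod, newdata)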
L. X. Wang and J. M. Mendel, "Generating fuzzy rules by learning from examples", IEEE Trans. Syst., Man, Cybern., vol. 22, no. 6, pp. 1414 - 1427 (1992).
frbs.learn, predict, and frbs.eng.