matlab - calculation gives me NaN
I am trying to implement multinomial logistic regression using gradient descent, but my cost function gives me NaN. As far as I can tell, the cost becomes NaN on one line inside the cost function (shown in bold below): classLevelSummation becomes NaN, even if I remove the large constant value -4444 from the exponent. Can someone tell me what I am doing wrong? Here is the function that calculates the weights by gradient descent; it assumes an extra constant feature is added to inputX, and each row of weight corresponds to one class:
    function [weight] = getWeightsUsingGradientDescentMultiNominal(trainingX, resultY, iterMax, alpha, weight0, lambda)
    % Returns the weights found by gradient descent; weight0 holds random
    % initial weights, one row per class. A detailed description is found here.
    rows = size(trainingX, 1);
    cols = size(trainingX, 2) + 1;
    weight = weight0;
    numOfClasses = size(weight, 1);
    % add a column of ones to the input data for the constant term
    A = ones(rows, 1);
    X = [A trainingX];
    % each row of weight corresponds to one class; the weights are
    % updated together with the cost function
    tempCost = 0;
    display(costFunctionMultiNominal(X, resultY, weight));
    plot(1, costFunctionMultiNominal(X, resultY, weight), 'r');
    hold on;
    for n = 1:iterMax
        % have to do this for all classes in weight
        for j = 1:numOfClasses
            % first calculate the summation over the rows for all x
            summation = zeros(1, cols);
            for i = 1:rows
                p = -1 * calculateMultiNominal(X(i, :), weight, j);
                if resultY(i) == j
                    p = 1 + p;
                end
                summation = summation + X(i, :) * p;
            end
            weight(j, :) = weight(j, :) - alpha * (summation / (-rows) + lambda * weight(j, :));
        end
        cost = costFunctionMultiNominal(X, resultY, weight);
        display(cost);
        costDiff = tempCost - cost;
        if n ~= 0 && abs(costDiff) / cost <= 0.0001
            display('Breaking out due to cost!');
            break;
        end
        tempCost = cost;
        plot(n, cost, 'r');
    end
    hold off;
    end
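calculateMultiNominal is not shown above; a minimal sketch of what it presumably computes, the softmax probability of class j for one row (this exact body is an assumption, not the original code):

    function p = calculateMultiNominal(x, weight, j)
    % probability of class j under the softmax model, for the row vector x
    scores = exp(x * weight');      % 1 x numOfClasses vector of exponentials
    p = scores(j) / sum(scores);    % normalize over all classes
    end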
The NaN comes from the exponential terms. I tried reducing the large constant (-4444), but it did not help. This is the line in the cost function where classLevelSummation becomes NaN:
    classLevelSummation = classLevelSummation + log(exp(inputX(i, :) * weight(j, :)' - 4444) / denominatorSum);
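For context, a minimal sketch of the cost function around that line (the loop structure and the final averaging are a reconstruction; inputX, resultY, weight, and denominatorSum are the names from the snippet):

    function cost = costFunctionMultiNominal(inputX, resultY, weight)
    % negative average log-likelihood over all training rows
    rows = size(inputX, 1);
    classLevelSummation = 0;
    for i = 1:rows
        j = resultY(i);
        % sum of the shifted exponentials over all classes
        denominatorSum = sum(exp(inputX(i, :) * weight' - 4444));
        % the line that becomes NaN:
        classLevelSummation = classLevelSummation + log(exp(inputX(i, :) * weight(j, :)' - 4444) / denominatorSum);
    end
    cost = -classLevelSummation / rows;
    end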
log(exp(blah) / bar): this is an unnecessary use of an exponential followed by a logarithm; the two operations undo each other, and the intermediate exp call can exceed the floating point range even when the final result is perfectly representable. Remember that log(a / b) is equal to log(a) - log(b), so the whole term can be computed without calling exp at all.
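A small illustration of why the rewrite matters (the numbers are invented; in double precision, exp overflows for arguments above roughly 709):

    z = 800;                   % a large dot product; exp(z) overflows to Inf
    logd = 750;                % suppose denominatorSum = exp(750), also Inf
    log(exp(z) / exp(logd))    % Inf / Inf is NaN, so log(...) is NaN
    z - logd                   % the same quantity computed safely: 50

Applied to your cost line, that means accumulating (inputX(i, :) * weight(j, :)' - 4444) - log(denominatorSum) instead, provided log(denominatorSum) is itself computed safely.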
A plain overflow in exp would give you an Inf rather than a NaN, though; the NaN shows up when one overflowed term is divided by another, since Inf / Inf is NaN. So you should also check the value of denominatorSum, because it is built from exponential terms as well.
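The standard remedy is the log-sum-exp trick: stay in log space and shift by the per-row maximum score rather than by a fixed constant like -4444. A minimal sketch (the helper name logSumExpOverClasses is invented here, not part of the original code):

    function lse = logSumExpOverClasses(x, weight)
    % log(sum over classes k of exp(x * weight(k, :)')), computed without
    % overflow: subtracting the maximum makes the largest exponent exp(0) = 1
    z = weight * x';                 % one score per class
    m = max(z);
    lse = m + log(sum(exp(z - m)));
    end

With it, the accumulation becomes

    classLevelSummation = classLevelSummation + inputX(i, :) * weight(j, :)' - logSumExpOverClasses(inputX(i, :), weight);

and neither the numerator nor the denominator is ever exponentiated on its own.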