
@modelx/data

Variables

Const calc

calc: { assocationRuleLearning: (transactions?: never[], options: any) => Promise<unknown>; getTransactions: (data: any[], options: any) => { transactions: any[][]; values: Set<unknown>; valuesMap: Map<any, any> } } = ...

Type declaration

  • assocationRuleLearning: (transactions?: never[], options: any) => Promise<unknown>
      • (transactions?: never[], options: any): Promise<unknown>
      • returns association rule learning results

        see

        https://github.com/alexisfacques/Node-FPGrowth

        Parameters

        • transactions: never[] = []

          sparse matrix of transactions

        • options: any

        Returns Promise<unknown>

        Returns the result from Node-FPGrowth or a summary of support and strong associations

  • getTransactions: (data: any[], options: any) => { transactions: any[][]; values: Set<unknown>; valuesMap: Map<any, any> }
      • (data: any[], options: any): { transactions: any[][]; values: Set<unknown>; valuesMap: Map<any, any> }
      • Formats an array of transactions into a sparse matrix like format for Apriori/Eclat

        see

        https://github.com/alexisfacques/Node-FPGrowth

        Parameters

        • data: any[]

          CSV data of transactions

        • options: any

        Returns { transactions: any[][]; values: Set<unknown>; valuesMap: Map<any, any> }

        { values: unique list of all values, valuesMap: map of values to labels, transactions: formatted sparse array }

        • transactions: any[][]
        • values: Set<unknown>
        • valuesMap: Map<any, any>
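The shape described above can be sketched in a few lines. This is an illustrative stand-in (the function name `toSparseTransactions` is not part of the library): each unique item gets an integer id, and every transaction becomes an array of ids.

```javascript
// Minimal sketch: map each unique item to an integer id and encode every
// transaction as an array of ids (a "sparse matrix like" format for Apriori/Eclat).
function toSparseTransactions(data) {
  const values = new Set();
  data.forEach(row => row.forEach(item => values.add(item)));
  const valuesMap = new Map();
  [...values].forEach((value, index) => valuesMap.set(value, index));
  const transactions = data.map(row => row.map(item => valuesMap.get(item)));
  return { transactions, values, valuesMap };
}

const { transactions } = toSparseTransactions([
  ['milk', 'bread'],
  ['bread', 'eggs'],
]);
// milk => 0, bread => 1, eggs => 2
console.log(transactions); // => [ [ 0, 1 ], [ 1, 2 ] ]
```

The library's actual implementation may assign ids differently; the point is the return shape: `{ transactions, values, valuesMap }`.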

Const cross_validation

cross_validation: { GridSearch: any; cross_validate_score: (options?: {}) => any[]; cross_validation_split: (dataset?: never[], options?: { folds: number; random_state: number }) => Data[]; grid_search: (options?: any) => any; kfolds: (dataset?: never[], options?: { folds: number; random_state: number }) => Data[]; train_test_split: (dataset?: Data, options?: TrainTestSplitOptions) => Data[] | { test: Data; train: Data } } = ...

Type declaration

  • GridSearch: any
  • cross_validate_score: (options?: {}) => any[]
      • (options?: {}): any[]
      • Used to test variance and bias of a prediction

        memberof

        cross_validation

        Parameters

        • options: {} = {}

        Returns any[]

        Array of accuracy calculations

  • cross_validation_split: (dataset?: never[], options?: { folds: number; random_state: number }) => Data[]
      • (dataset?: never[], options?: { folds: number; random_state: number }): Data[]
      • Provides train/test indices to split data in train/test sets. Split dataset into k consecutive folds. Each fold is then used once as a validation while the k - 1 remaining folds form the training set.

        memberof

        cross_validation

        example

        const testArray = [20, 25, 10, 33, 50, 42, 19, 34, 90, 23];
        const crossValidationArrayKFolds = ms.cross_validation.cross_validation_split(testArray, {
          folds: 2,
          random_state: 0,
        });
        // => [ [ 50, 20, 34, 33, 10 ], [ 23, 90, 42, 19, 25 ] ]

        Parameters

        • dataset: never[] = []

          array of data to split

        • options: { folds: number; random_state: number } = ...
          • folds: number
          • random_state: number

        Returns Data[]

        returns dataset split into k consecutive folds
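The mechanics behind a k-fold split can be sketched as: shuffle once with a seeded PRNG, then cut the shuffled data into k consecutive folds. The helper below is illustrative only — `kfoldSplit` and the mulberry32-style generator are assumptions, not the library's actual `random_state` implementation.

```javascript
// Minimal sketch of a k-fold split: seeded shuffle, then k consecutive slices.
function kfoldSplit(dataset, { folds = 2, random_state = 0 } = {}) {
  let seed = random_state + 1;
  const random = () => {
    // mulberry32-style PRNG so the same random_state yields the same split
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
  // Fisher-Yates shuffle of a copy, leaving the input untouched
  const shuffled = [...dataset];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const foldSize = Math.floor(shuffled.length / folds);
  return Array.from({ length: folds }, (_, k) =>
    shuffled.slice(k * foldSize, (k + 1) * foldSize));
}
```

Each fold can then serve once as the validation set while the remaining k - 1 folds form the training set.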

  • grid_search: (options?: any) => any
      • (options?: any): any
      • Used to test variance and bias of a prediction with parameter tuning

        memberof

        cross_validation

        Parameters

        • options: any = {}

        Returns any

        Array of accuracy calculations

  • kfolds: (dataset?: never[], options?: { folds: number; random_state: number }) => Data[]
      • (dataset?: never[], options?: { folds: number; random_state: number }): Data[]
      • Provides train/test indices to split data in train/test sets. Split dataset into k consecutive folds. Each fold is then used once as a validation while the k - 1 remaining folds form the training set.

        memberof

        cross_validation

        example

        const testArray = [20, 25, 10, 33, 50, 42, 19, 34, 90, 23];
        const crossValidationArrayKFolds = ms.cross_validation.cross_validation_split(testArray, {
          folds: 2,
          random_state: 0,
        });
        // => [ [ 50, 20, 34, 33, 10 ], [ 23, 90, 42, 19, 25 ] ]

        Parameters

        • dataset: never[] = []

          array of data to split

        • options: { folds: number; random_state: number } = ...
          • folds: number
          • random_state: number

        Returns Data[]

        returns dataset split into k consecutive folds

  • train_test_split: (dataset?: Data, options?: TrainTestSplitOptions) => Data[] | { test: Data; train: Data }
      • (dataset?: Data, options?: TrainTestSplitOptions): Data[] | { test: Data; train: Data }
      • Split arrays into random train and test subsets

        memberof

        cross_validation

        example

        const testArray = [20, 25, 10, 33, 50, 42, 19, 34, 90, 23];
        const trainTestSplit = ms.cross_validation.train_test_split(testArray, {
          test_size: 0.2,
          random_state: 0,
        });
        // => { train: [ 50, 20, 34, 33, 10, 23, 90, 42 ], test: [ 25, 19 ] }

        Parameters

        • dataset: Data = []

          array of data to split

        • options: TrainTestSplitOptions = ...

        Returns Data[] | { test: Data; train: Data }

        returns training and test arrays either as an object or arrays
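The idea reduces to: shuffle a copy of the data, then slice off `test_size` of it. The sketch below is an assumption for illustration (the name `trainTestSplit` and the unseeded `Math.random` shuffle are not the library's internals, which also support a reproducible `random_state`).

```javascript
// Minimal sketch of a train/test split: Fisher-Yates shuffle, then slice.
function trainTestSplit(dataset, { test_size = 0.2 } = {}) {
  const shuffled = [...dataset];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const testCount = Math.round(shuffled.length * test_size);
  return {
    test: shuffled.slice(0, testCount),
    train: shuffled.slice(testCount),
  };
}

const { train, test } = trainTestSplit([20, 25, 10, 33, 50, 42, 19, 34, 90, 23], { test_size: 0.2 });
// train has 8 elements, test has 2
```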

Const csv

csv: __module = ...

Const nlp

nlp: any = ...

Const util

util: { ExpScaler: (z: number[]) => number[]; LogScaler: (z: number[]) => number[]; MAD: (actuals: number[], estimates: number[]) => number; MADMeanRatio: (actuals: number[], estimates: number[]) => number; MAPE: (actuals: number[], estimates: number[]) => number; MFE: (actuals: number[], estimates: number[]) => number; MMR: (actuals: number[], estimates: number[]) => number; MSE: (actuals: number[], estimates: number[]) => number; MinMaxScaler: (z: number[]) => number[]; MinMaxScalerTransforms: (vector?: never[], nan_value?: number, return_nan?: boolean, inputComponents?: InputComponents) => ScaledTransforms; StandardScaler: (z: number[]) => number[]; StandardScalerTransforms: (vector?: never[], nan_value?: number, return_nan?: boolean, inputComponents?: InputComponents) => ScaledTransforms; TS: (actuals: number[], estimates: number[]) => number; adjustedCoefficentOfDetermination: (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }) => number; adjustedRSquared: (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }) => number; approximateZPercentile: (z: number, alpha?: boolean) => number; avg: ArrayCalculation; coefficientOfCorrelation: (actuals?: number[], estimates?: number[]) => number; coefficientOfDetermination: (actuals?: number[], estimates?: number[]) => number; forecastErrors: (actuals: number[], estimates: number[]) => number[]; getSafePropertyName: (name: string) => string; max: ArraySort; mean: ArrayCalculation; meanAbsoluteDeviation: (actuals: number[], estimates: number[]) => number; meanAbsolutePercentageError: (actuals: number[], estimates: number[]) => number; meanForecastError: (actuals: number[], estimates: number[]) => number; meanSquaredError: (actuals: number[], estimates: number[]) => number; min: ArraySort; pivotArrays: (arrays?: Matrix) => Matrix; pivotVector: (vectors?: any[][]) => Matrix; r: (actuals?: 
number[], estimates?: number[]) => number; rBarSquared: (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }) => number; rSquared: (actuals?: number[], estimates?: number[]) => number; range: { (start: number, end?: number, step?: number): number[]; (end: number, index: string | number, guard: object): number[] }; rangeRight: { (start: number, end?: number, step?: number): number[]; (end: number, index: string | number, guard: object): number[] }; scale: (a: number[], d: number) => number[]; sd: ArrayCalculation; squaredDifference: (left: number[], right: number[]) => number[]; standardError: (actuals?: number[], estimates?: number[]) => number; standardScore: (observations?: number[]) => number[]; sum: ArrayCalculation; trackingSignal: (actuals: number[], estimates: number[]) => number; zScore: (observations?: number[]) => number[] } = ...
namespace

Type declaration

  • ExpScaler: (z: number[]) => number[]
      • (z: number[]): number[]
      • Parameters

        • z: number[]

        Returns number[]

  • LogScaler: (z: number[]) => number[]
      • (z: number[]): number[]
      • Parameters

        • z: number[]

        Returns number[]

  • MAD: (actuals: number[], estimates: number[]) => number
  • MADMeanRatio: (actuals: number[], estimates: number[]) => number
      • (actuals: number[], estimates: number[]): number
      • MAD over Mean Ratio - The MAD/Mean ratio is an alternative to the MAPE that is better suited to intermittent and low-volume data. As stated previously, percentage errors cannot be calculated when the actual equals zero and can take on extreme values when dealing with low-volume data. These issues become magnified when you start to average MAPEs over multiple time series. The MAD/Mean ratio tries to overcome this problem by dividing the MAD by the Mean—essentially rescaling the error to make it comparable across time series of varying scales

        memberof

        util

        see

        https://www.forecastpro.com/Trends/forecasting101August2011.html

        example

        const actuals = [45, 38, 43, 39];
        const estimates = [41, 43, 41, 42];
        const MMR = ms.util.MADMeanRatio(actuals, estimates);
        MMR.toFixed(2) // => 0.08

        Parameters

        • actuals: number[]

          numerical samples

        • estimates: number[]

          estimates values

        Returns number

        MMR
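The computation is simply the mean absolute deviation of the forecast errors divided by the mean of the actuals. A minimal sketch (the function name here is illustrative, not the library export):

```javascript
// MAD/Mean ratio: mean absolute error rescaled by the mean of the actuals,
// which makes the error comparable across series of different scales.
function madMeanRatio(actuals, estimates) {
  const n = actuals.length;
  const mad = actuals.reduce((sum, a, i) => sum + Math.abs(a - estimates[i]), 0) / n;
  const mean = actuals.reduce((sum, a) => sum + a, 0) / n;
  return mad / mean;
}

madMeanRatio([45, 38, 43, 39], [41, 43, 41, 42]).toFixed(2); // => '0.08'
```

This matches the documented example: MAD is 3.5, the mean of the actuals is 41.25, and 3.5 / 41.25 ≈ 0.08.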

  • MAPE: (actuals: number[], estimates: number[]) => number
      • (actuals: number[], estimates: number[]): number
      • MAPE (Mean Absolute Percent Error) measures the size of the error in percentage terms

        memberof

        util

        see

        https://www.forecastpro.com/Trends/forecasting101August2011.html

        example

        const actuals = [45, 38, 43, 39];
        const estimates = [41, 43, 41, 42];
        const MAPE = ms.util.meanAbsolutePercentageError(actuals, estimates);
        MAPE.toFixed(2) // => 0.86

        Parameters

        • actuals: number[]

          numerical samples

        • estimates: number[]

          estimates values

        Returns number

        MAPE

  • MFE: (actuals: number[], estimates: number[]) => number
  • MMR: (actuals: number[], estimates: number[]) => number
      • (actuals: number[], estimates: number[]): number
      • MAD over Mean Ratio - The MAD/Mean ratio is an alternative to the MAPE that is better suited to intermittent and low-volume data. As stated previously, percentage errors cannot be calculated when the actual equals zero and can take on extreme values when dealing with low-volume data. These issues become magnified when you start to average MAPEs over multiple time series. The MAD/Mean ratio tries to overcome this problem by dividing the MAD by the Mean—essentially rescaling the error to make it comparable across time series of varying scales

        memberof

        util

        see

        https://www.forecastpro.com/Trends/forecasting101August2011.html

        example

        const actuals = [45, 38, 43, 39];
        const estimates = [41, 43, 41, 42];
        const MMR = ms.util.MADMeanRatio(actuals, estimates);
        MMR.toFixed(2) // => 0.08

        Parameters

        • actuals: number[]

          numerical samples

        • estimates: number[]

          estimates values

        Returns number

        MMR

  • MSE: (actuals: number[], estimates: number[]) => number
      • (actuals: number[], estimates: number[]): number
      • The mean squared error measures the average of the squared differences between the estimated values and the actual values

        memberof

        util

        see

        http://onlinestatbook.com/2/regression/accuracy.html

        example

        const actuals = [45, 38, 43, 39];
        const estimates = [41, 43, 41, 42];
        const MSE = ms.util.meanSquaredError(actuals, estimates); // => 13.5

        Parameters

        • actuals: number[]

          numerical samples

        • estimates: number[]

          estimates values

        Returns number

        MSE
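The formula is the average of the squared residuals. A minimal sketch (function name illustrative, not the library export):

```javascript
// Mean squared error: average of squared residuals (actual - estimate)^2.
function meanSquaredError(actuals, estimates) {
  return actuals.reduce((sum, a, i) => sum + (a - estimates[i]) ** 2, 0) / actuals.length;
}

meanSquaredError([45, 38, 43, 39], [41, 43, 41, 42]); // => 13.5
```

This reproduces the documented example: squared errors 16, 25, 4, 9 sum to 54, and 54 / 4 = 13.5.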

  • MinMaxScaler: (z: number[]) => number[]
      • (z: number[]): number[]
      • Transforms features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, i.e. between zero and one.

        memberof

        util

        Parameters

        • z: number[]

          array of integers or floats

        Returns number[]

  • MinMaxScalerTransforms: (vector?: never[], nan_value?: number, return_nan?: boolean, inputComponents?: InputComponents) => ScaledTransforms
      • (vector?: never[], nan_value?: number, return_nan?: boolean, inputComponents?: InputComponents): ScaledTransforms
      • This function returns two functions that can min-max scale new inputs and reverse scale new outputs

        Parameters

        • vector: never[] = []
        • nan_value: number = -1
        • return_nan: boolean = false
        • inputComponents: InputComponents = {}

        Returns ScaledTransforms

        • { scale: Function, descale: Function }
  • StandardScaler: (z: number[]) => number[]
      • (z: number[]): number[]
      • Standardize features by removing the mean and scaling to unit variance

        Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using the transform method.

        Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not look more or less like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance)

        memberof

        util

        Parameters

        • z: number[]

          array of integers or floats

        Returns number[]

  • StandardScalerTransforms: (vector?: never[], nan_value?: number, return_nan?: boolean, inputComponents?: InputComponents) => ScaledTransforms
      • (vector?: never[], nan_value?: number, return_nan?: boolean, inputComponents?: InputComponents): ScaledTransforms
      • This function returns two functions that can standard scale new inputs and reverse scale new outputs

        Parameters

        • vector: never[] = []
        • nan_value: number = -1
        • return_nan: boolean = false
        • inputComponents: InputComponents = {}

        Returns ScaledTransforms

        • { scale: Function, descale: Function }
  • TS: (actuals: number[], estimates: number[]) => number
      • (actuals: number[], estimates: number[]): number
  • adjustedCoefficentOfDetermination: (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }) => number
      • (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }): number
      • You can use the adjusted coefficient of determination to determine how well a multiple regression equation “fits” the sample data. The adjusted coefficient of determination is closely related to the coefficient of determination (also known as R2) that you use to test the results of a simple regression equation.

        example

        const adjr2 = ms.util.adjustedCoefficentOfDetermination({
          rSquared: 0.944346527,
          sampleSize: 8,
          independentVariables: 2,
        });
        adjr2.toFixed(3) // => 0.922

        memberof

        util

        see

        http://www.dummies.com/education/math/business-statistics/how-to-calculate-the-adjusted-coefficient-of-determination/

        Parameters

        • options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }
          • actuals: number[]
          • estimates: number[]
          • independentVariables: number

            the number of independent variables in the regression equation

          • rSquared: number
          • sampleSize: number

        Returns number

        adjusted r^2 for multiple linear regression
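The standard adjustment formula is adjR² = 1 - (1 - R²)(n - 1) / (n - k - 1), with n the sample size and k the number of independent variables. A minimal sketch under that assumption (function name illustrative):

```javascript
// Adjusted R^2: penalizes R^2 for each additional independent variable.
function adjustedRSquaredSketch({ rSquared, sampleSize, independentVariables }) {
  return 1 - (1 - rSquared) * (sampleSize - 1) / (sampleSize - independentVariables - 1);
}

adjustedRSquaredSketch({ rSquared: 0.944346527, sampleSize: 8, independentVariables: 2 }).toFixed(3);
// => '0.922'
```

This reproduces the documented example value of 0.922.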

  • adjustedRSquared: (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }) => number
      • (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }): number
      • You can use the adjusted coefficient of determination to determine how well a multiple regression equation “fits” the sample data. The adjusted coefficient of determination is closely related to the coefficient of determination (also known as R2) that you use to test the results of a simple regression equation.

        example

        const adjr2 = ms.util.adjustedCoefficentOfDetermination({
          rSquared: 0.944346527,
          sampleSize: 8,
          independentVariables: 2,
        });
        adjr2.toFixed(3) // => 0.922

        memberof

        util

        see

        http://www.dummies.com/education/math/business-statistics/how-to-calculate-the-adjusted-coefficient-of-determination/

        Parameters

        • options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }
          • actuals: number[]
          • estimates: number[]
          • independentVariables: number

            the number of independent variables in the regression equation

          • rSquared: number
          • sampleSize: number

        Returns number

        adjusted r^2 for multiple linear regression

  • approximateZPercentile: (z: number, alpha?: boolean) => number
  • avg: ArrayCalculation
  • coefficientOfCorrelation: (actuals?: number[], estimates?: number[]) => number
      • (actuals?: number[], estimates?: number[]): number
      • The coefficient of correlation, R, describes how well the given data fits a line or a curve.

        example

        const actuals = [39, 42, 67, 76];
        const estimates = [44, 40, 60, 84];
        const R = ms.util.coefficientOfCorrelation(actuals, estimates);
        R.toFixed(4) // => 0.9408

        memberof

        util

        see

        https://calculator.tutorvista.com/r-squared-calculator.html

        Parameters

        • actuals: number[] = []

          numerical samples

        • estimates: number[] = []

          estimates values

        Returns number

        R
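The value matches Pearson's r: the covariance of the two series divided by the product of their standard deviations. A minimal sketch under that assumption (function body illustrative, not the library's internals):

```javascript
// Pearson's r: covariance over the product of standard deviations.
function pearsonR(actuals, estimates) {
  const n = actuals.length;
  const meanA = actuals.reduce((s, x) => s + x, 0) / n;
  const meanE = estimates.reduce((s, x) => s + x, 0) / n;
  let cov = 0, varA = 0, varE = 0;
  for (let i = 0; i < n; i++) {
    cov += (actuals[i] - meanA) * (estimates[i] - meanE);
    varA += (actuals[i] - meanA) ** 2;
    varE += (estimates[i] - meanE) ** 2;
  }
  return cov / Math.sqrt(varA * varE);
}

pearsonR([39, 42, 67, 76], [44, 40, 60, 84]).toFixed(4); // => '0.9408'
```

This reproduces the documented example value of 0.9408.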

  • coefficientOfDetermination: (actuals?: number[], estimates?: number[]) => number
      • (actuals?: number[], estimates?: number[]): number
      • In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variance in the dependent variable that is predictable from the independent variable(s). Compares the distance of estimated values to the mean of the actuals, \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i

        example

        const actuals = [2, 4, 5, 4, 5];
        const estimates = [2.8, 3.4, 4, 4.6, 5.2];
        const r2 = ms.util.coefficientOfDetermination(actuals, estimates);
        r2.toFixed(1) // => 0.6

        memberof

        util

        see

        https://en.wikipedia.org/wiki/Coefficient_of_determination http://statisticsbyjim.com/regression/standard-error-regression-vs-r-squared/

        Parameters

        • actuals: number[] = []

          numerical samples

        • estimates: number[] = []

          estimates values

        Returns number

        r^2
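Consistent with the description above (comparing estimates to the mean), r² can be computed as the regression sum of squares over the total sum of squares. A minimal sketch (function name illustrative):

```javascript
// R^2 as SSregression / SStotal, both measured around the mean of the actuals.
function rSquaredSketch(actuals, estimates) {
  const mean = actuals.reduce((s, x) => s + x, 0) / actuals.length;
  const ssTotal = actuals.reduce((s, a) => s + (a - mean) ** 2, 0);
  const ssRegression = estimates.reduce((s, e) => s + (e - mean) ** 2, 0);
  return ssRegression / ssTotal;
}

rSquaredSketch([2, 4, 5, 4, 5], [2.8, 3.4, 4, 4.6, 5.2]).toFixed(1); // => '0.6'
```

This reproduces the documented example: SSregression = 3.6, SStotal = 6, so r² = 0.6.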

  • forecastErrors: (actuals: number[], estimates: number[]) => number[]
      • (actuals: number[], estimates: number[]): number[]
      • The errors (residuals) from actuals and estimates

        memberof

        util

        example

        const actuals = [45, 38, 43, 39];
        const estimates = [41, 43, 41, 42];
        const errors = ms.util.forecastErrors(actuals, estimates); // => [ 4, -5, 2, -3 ]

        Parameters

        • actuals: number[]

          numerical samples

        • estimates: number[]

          estimates values

        Returns number[]

        errors (residuals)

  • getSafePropertyName: (name: string) => string
      • (name: string): string
      • returns a safe column name / url slug from a string

        Parameters

        • name: string

        Returns string

  • max: ArraySort
  • mean: ArrayCalculation
  • meanAbsoluteDeviation: (actuals: number[], estimates: number[]) => number
  • meanAbsolutePercentageError: (actuals: number[], estimates: number[]) => number
      • (actuals: number[], estimates: number[]): number
      • MAPE (Mean Absolute Percent Error) measures the size of the error in percentage terms

        memberof

        util

        see

        https://www.forecastpro.com/Trends/forecasting101August2011.html

        example

        const actuals = [45, 38, 43, 39];
        const estimates = [41, 43, 41, 42];
        const MAPE = ms.util.meanAbsolutePercentageError(actuals, estimates);
        MAPE.toFixed(2) // => 0.86

        Parameters

        • actuals: number[]

          numerical samples

        • estimates: number[]

          estimates values

        Returns number

        MAPE

  • meanForecastError: (actuals: number[], estimates: number[]) => number
  • meanSquaredError: (actuals: number[], estimates: number[]) => number
      • (actuals: number[], estimates: number[]): number
      • The mean squared error measures the average of the squared differences between the estimated values and the actual values

        memberof

        util

        see

        http://onlinestatbook.com/2/regression/accuracy.html

        example

        const actuals = [45, 38, 43, 39];
        const estimates = [41, 43, 41, 42];
        const MSE = ms.util.meanSquaredError(actuals, estimates); // => 13.5

        Parameters

        • actuals: number[]

          numerical samples

        • estimates: number[]

          estimates values

        Returns number

        MSE

  • min: ArraySort
  • pivotArrays: (arrays?: Matrix) => Matrix
      • (arrays?: Matrix): Matrix
      • returns a matrix of values by combining arrays into a matrix

        memberof

        util

        example

        const arrays = [
          [ 1, 1, 3, 3 ],
          [ 2, 2, 3, 3 ],
          [ 3, 3, 4, 3 ],
        ];
        pivotArrays(arrays);
        // => [
        //   [ 1, 2, 3 ],
        //   [ 1, 2, 3 ],
        //   [ 3, 3, 4 ],
        //   [ 3, 3, 3 ],
        // ]

        Parameters

        • arrays: Matrix = []

        Returns Matrix

        a matrix of column values

  • pivotVector: (vectors?: any[][]) => Matrix
      • (vectors?: any[][]): Matrix
      • returns an array of vectors as an array of arrays

        example

        const vectors = [
          [ 1, 2, 3 ],
          [ 1, 2, 3 ],
          [ 3, 3, 4 ],
          [ 3, 3, 3 ],
        ];
        const arrays = pivotVector(vectors);
        // => [ [ 1, 2, 3, 3 ], [ 2, 2, 3, 3 ], [ 3, 3, 4, 3 ] ]

        memberof

        util

        Parameters

        • vectors: any[][] = []

        Returns Matrix
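As the example shows, this operation is a matrix transpose: column j of the input becomes row j of the output. A minimal sketch (function body illustrative):

```javascript
// Transpose an array of row vectors into an array of column arrays.
function pivotVectorSketch(vectors = []) {
  if (!vectors.length) return [];
  return vectors[0].map((_, col) => vectors.map(row => row[col]));
}

pivotVectorSketch([[1, 2, 3], [1, 2, 3], [3, 3, 4], [3, 3, 3]]);
// => [ [ 1, 1, 3, 3 ], [ 2, 2, 3, 3 ], [ 3, 3, 4, 3 ] ]
```

The same transpose, applied in the other direction, describes pivotArrays above.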

  • r: (actuals?: number[], estimates?: number[]) => number
      • (actuals?: number[], estimates?: number[]): number
      • The coefficient of correlation, R, describes how well the given data fits a line or a curve.

        example

        const actuals = [39, 42, 67, 76];
        const estimates = [44, 40, 60, 84];
        const R = ms.util.coefficientOfCorrelation(actuals, estimates);
        R.toFixed(4) // => 0.9408

        memberof

        util

        see

        https://calculator.tutorvista.com/r-squared-calculator.html

        Parameters

        • actuals: number[] = []

          numerical samples

        • estimates: number[] = []

          estimates values

        Returns number

        R

  • rBarSquared: (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }) => number
      • (options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }): number
      • You can use the adjusted coefficient of determination to determine how well a multiple regression equation “fits” the sample data. The adjusted coefficient of determination is closely related to the coefficient of determination (also known as R2) that you use to test the results of a simple regression equation.

        example

        const adjr2 = ms.util.adjustedCoefficentOfDetermination({
          rSquared: 0.944346527,
          sampleSize: 8,
          independentVariables: 2,
        });
        adjr2.toFixed(3) // => 0.922

        memberof

        util

        see

        http://www.dummies.com/education/math/business-statistics/how-to-calculate-the-adjusted-coefficient-of-determination/

        Parameters

        • options: { actuals: number[]; estimates: number[]; independentVariables: number; rSquared: number; sampleSize: number }
          • actuals: number[]
          • estimates: number[]
          • independentVariables: number

            the number of independent variables in the regression equation

          • rSquared: number
          • sampleSize: number

        Returns number

        adjusted r^2 for multiple linear regression

  • rSquared: (actuals?: number[], estimates?: number[]) => number
      • (actuals?: number[], estimates?: number[]): number
      • The coefficient of determination, r^2, describes how well the given data fits a line or a curve.

        Parameters

        • actuals: number[] = []
        • estimates: number[] = []

        Returns number

        r^2

  • range: { (start: number, end?: number, step?: number): number[]; (end: number, index: string | number, guard: object): number[] }
      • (start: number, end?: number, step?: number): number[]
      • (end: number, index: string | number, guard: object): number[]
      • Creates an array of numbers (positive and/or negative) progressing from start up to, but not including, end. If end is not specified it’s set to start with start then set to 0. If end is less than start a zero-length range is created unless a negative step is specified.

        Parameters

        • start: number

          The start of the range.

        • Optional end: number

          The end of the range.

        • Optional step: number

          The value to increment or decrement by.

        Returns number[]

        Returns a new range array.

      • see

        _.range

        Parameters

        • end: number
        • index: string | number
        • guard: object

        Returns number[]

  • rangeRight: { (start: number, end?: number, step?: number): number[]; (end: number, index: string | number, guard: object): number[] }
      • (start: number, end?: number, step?: number): number[]
      • (end: number, index: string | number, guard: object): number[]
      • This method is like _.range except that it populates values in descending order.

        category

        Util

        example

        _.rangeRight(4); // => [3, 2, 1, 0]

        _.rangeRight(-4); // => [-3, -2, -1, 0]

        _.rangeRight(1, 5); // => [4, 3, 2, 1]

        _.rangeRight(0, 20, 5); // => [15, 10, 5, 0]

        _.rangeRight(0, -4, -1); // => [-3, -2, -1, 0]

        _.rangeRight(1, 4, 0); // => [1, 1, 1]

        _.rangeRight(0); // => []

        Parameters

        • start: number

          The start of the range.

        • Optional end: number

          The end of the range.

        • Optional step: number

          The value to increment or decrement by.

        Returns number[]

        Returns the new array of numbers.

      • see

        _.rangeRight

        Parameters

        • end: number
        • index: string | number
        • guard: object

        Returns number[]

  • scale: (a: number[], d: number) => number[]
      • (a: number[], d: number): number[]
      • Parameters

        • a: number[]
        • d: number

        Returns number[]

  • sd: ArrayCalculation
  • squaredDifference: (left: number[], right: number[]) => number[]
      • (left: number[], right: number[]): number[]
      • Returns an array of the squared differences of two arrays

        memberof

        util

        Parameters

        • left: number[]
        • right: number[]

        Returns number[]

        Squared difference of left minus right array

  • standardError: (actuals?: number[], estimates?: number[]) => number
      • (actuals?: number[], estimates?: number[]): number
      • The standard error of the estimate is a measure of the accuracy of predictions made with a regression line. Compares the estimate to the actual value

        memberof

        util

        see

        http://onlinestatbook.com/2/regression/accuracy.html

        example

        const actuals = [2, 4, 5, 4, 5];
        const estimates = [2.8, 3.4, 4, 4.6, 5.2];
        const SE = ms.util.standardError(actuals, estimates);
        SE.toFixed(2) // => 0.89

        Parameters

        • actuals: number[] = []

          numerical samples

        • estimates: number[] = []

          estimates values

        Returns number

        Standard Error of the Estimate
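The conventional formula for the standard error of the estimate is sqrt(SSresidual / (n - 2)). A minimal sketch under that assumption (function name illustrative):

```javascript
// Standard error of the estimate: sqrt of residual sum of squares over (n - 2).
function standardErrorSketch(actuals, estimates) {
  const ssResidual = actuals.reduce((s, a, i) => s + (a - estimates[i]) ** 2, 0);
  return Math.sqrt(ssResidual / (actuals.length - 2));
}

standardErrorSketch([2, 4, 5, 4, 5], [2.8, 3.4, 4, 4.6, 5.2]).toFixed(2); // => '0.89'
```

This reproduces the documented example: the residual sum of squares is 2.4, and sqrt(2.4 / 3) ≈ 0.89.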

  • standardScore: (observations?: number[]) => number[]
  • sum: ArrayCalculation
  • trackingSignal: (actuals: number[], estimates: number[]) => number
      • (actuals: number[], estimates: number[]): number
  • zScore: (observations?: number[]) => number[]