Title: | Kernel Distance Metric Learning for Mixed-Type Data |
---|---|
Description: | Distance metrics for mixed-type data consisting of continuous, nominal, and ordinal variables. This methodology uses additive and product kernels to calculate similarity functions and metrics, and selects variables relevant to the underlying distance through bandwidth selection via maximum similarity cross-validation. These methods can be used in any distance-based algorithm, such as distance-based clustering. For further details, we refer the reader to Ghashti and Thompson (2024) <doi:10.48550/arXiv.2306.01890> for dkps() methodology, and Ghashti (2024) <doi:10.14288/1.0443975> for dkss() methodology. |
Authors: | John R. J. Thompson [aut, cre], Jesse S. Ghashti [aut] |
Maintainer: | John R. J. Thompson <[email protected]> |
License: | GPL (>= 2) |
Version: | 1.1.0 |
Built: | 2024-11-21 03:49:54 UTC |
Source: | https://github.com/cran/kdml |
This function generates a mixed-type data frame with a combination of continuous (numeric), nominal (factor), and ordinal (ordered) variables with prespecified cluster overlap for each variable type. confactord allows the user to specify the number of each variable type, the number of variables per type that have cluster overlap, the amount of cluster overlap for each variable type, the number of levels for the nominal and ordinal variables, and the proportion of observations per class membership. Within- and across-type variables are generated independently from one another. Currently, only two classes may be generated.
confactord(n = 200, popProb = c(0.5, 0.5), numMixVar = c(1, 1, 1),
           numMixVarOl = c(1, 1, 1), olVarType = c(0.1, 0.1, 0.1),
           catLevels = c(2, 4))
n | integer number of observations to be generated. Defaults to n = 200. |
popProb | numeric vector of length two specifying the proportion of observations allocated to each class membership, which must sum to one. Defaults to popProb = c(0.5, 0.5). |
numMixVar | numeric vector of integers of length three specifying (in order) the total number of continuous (numeric), nominal (factor), and ordinal (ordered) variables to be generated. If a specific variable type is not required, set the appropriate vector index to zero. Defaults to numMixVar = c(1, 1, 1). |
numMixVarOl | numeric vector of integers of length three specifying (in order) the total number of continuous (numeric), nominal (factor), and ordinal (ordered) variables that will have class membership overlap. If all variables are to be well-separated by class membership, set all indices to zero. No index of this vector may be greater than the corresponding index in numMixVar. Defaults to numMixVarOl = c(1, 1, 1). |
olVarType | numeric vector of length three specifying (in order) the percentage of class membership overlap to be applied to the continuous (numeric), nominal (factor), and ordinal (ordered) variable types. No argument is required if numMixVarOl = c(0, 0, 0). Defaults to olVarType = c(0.1, 0.1, 0.1). |
catLevels | numeric vector of length two specifying (in order) the number of levels (integer values) for each of the nominal (factor) and ordinal (ordered) variable types. Defaults to catLevels = c(2, 4). |
Continuous variables are generated independently from normal distributions, with means determined by true class membership. If overlap is specified, additional variance is introduced to simulate cluster overlap. Nominal variables are generated using Dirichlet distributions representing different population proportions. Ordinal variables are initially simulated as continuous variables and then discretized into ordered categories based on quantile distributions, similar to a latent class model where ordinal categories are inferred based on underlying continuous distributions and adjusted for cluster overlap parameters.
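The latent-class construction of the ordinal variables described above can be sketched in base R; the variable names and the exact overlap mechanism here are illustrative, not the package's internals:

```r
set.seed(42)
n <- 200
class <- rep(c(1, 2), each = n / 2)              # two true class memberships
# latent continuous variable with class-dependent means (cluster separation)
latent <- rnorm(n, mean = ifelse(class == 1, 0, 2), sd = 1)
# added noise blurs the latent variable to mimic cluster overlap
overlap <- 0.1
latent <- latent + rnorm(n, sd = overlap * sd(latent))
# discretize into four ordered categories at the empirical quartiles
breaks <- quantile(latent, probs = seq(0, 1, length.out = 5))
x_ord <- cut(latent, breaks = breaks, include.lowest = TRUE,
             labels = 1:4, ordered_result = TRUE)
table(x_ord, class)                              # category counts by class
```

With overlap = 0 the quartile cut tracks class membership closely; larger overlap values mix the categories across classes.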
confactord returns a list object with the following components:
data | a data.frame of the generated mixed-type data. |
class | a numeric vector of integers specifying the true class memberships for the returned data. |
John R. J. Thompson [email protected], Jesse S. Ghashti [email protected]
mscv.dkss, mscv.dkps, dkss, dkps
# EXAMPLE 1: Default implementation generates the following:
# 200 observations split into two clusters of equal size (100 observations each)
# Three variables: one each of numeric, factor, and ordered
# Each variable has ten percent cluster overlap
# Nominal variable is binary
# Ordinal variable has four levels
df1 <- confactord()

# EXAMPLE 2:
# 500 observations; 100 observations in cluster one and 400 in cluster two
# Three continuous variables, two nominal, one ordinal
# Only one continuous variable has cluster overlap
# All nominal and ordinal variables have cluster overlap
# Cluster overlap for the continuous variable is twenty percent
# Cluster overlap for the nominal variables is thirty percent
# Cluster overlap for the ordinal variable is forty percent
# Nominal variables have three levels; the ordinal variable has five
df2 <- confactord(n = 500, popProb = c(0.2, 0.8), numMixVar = c(3, 2, 1),
                  numMixVarOl = c(1, 2, 1), olVarType = c(0.2, 0.3, 0.4),
                  catLevels = c(3, 5))
This function calculates the pairwise distances between mixed-type observations consisting of numeric (continuous), factor (nominal), and ordered factor (ordinal) variables using the method described in Ghashti and Thompson (2023). This kernel metric learning methodology learns the bandwidths associated with each kernel function for each variable type and returns a distance matrix that can be used in any distance-based clustering algorithm.
dkps(df, bw = "mscv", cFUN = "c_gaussian", uFUN = "u_aitken",
     oFUN = "o_wangvanryzin", stan = TRUE, verbose = FALSE)
df | a data.frame containing the mixed-type data, with continuous variables as numeric, nominal variables as factor, and ordinal variables as ordered. |
bw | a bandwidth specification method. This can be set as a numeric vector of bandwidths (one per variable in df), or as a character string: "mscv" for maximum-similarity cross-validation, or "np" for likelihood cross-validation via the np package. Defaults to "mscv". |
cFUN | character string specifying the continuous kernel function. Options include c_gaussian (default) and c_epanechnikov, among others. |
uFUN | character string specifying the nominal kernel function for unordered factors. Options include u_aitken (default) and u_aitchisonaitken. |
oFUN | character string specifying the ordinal kernel function for ordered factors. Options include o_wangvanryzin (default) and o_habbema. |
stan | a logical value which specifies whether to scale the resulting distance matrix between 0 and 1 using min-max normalization. If set to FALSE, the unscaled distance matrix is returned. Defaults to TRUE. |
verbose | a logical value which specifies whether to print procedural steps to the console. Defaults to FALSE. |
dkps implements the distance using kernel product similarity (DKPS) as described by Ghashti and Thompson (2023). This approach uses product kernels for continuous variables and summation kernels for nominal and ordinal data, which are then summed over all variable types to return the pairwise distance between mixed-type data.
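As a rough illustration of this product-plus-summation structure, the similarity between one pair of mixed-type observations can be sketched as follows; the kernel forms below are the standard Gaussian, Aitchison-Aitken, and Wang-van Ryzin kernels, and the exact combination used internally by dkps may differ:

```r
# hypothetical per-variable kernels with bandwidths h / lam
k_cont <- function(x, y, h) exp(-0.5 * ((x - y) / h)^2)         # Gaussian
k_nom  <- function(x, y, lam, c) if (x == y) 1 - lam else lam / (c - 1)
k_ord  <- function(x, y, lam) {
  if (x == y) 1 - lam else ((1 - lam) / 2) * lam^abs(x - y)
}

# one pair of mixed-type observations: two continuous, one nominal, one ordinal
xi <- list(cont = c(1.2, 3.4), nom = 2L, ord = 1L)
xj <- list(cont = c(0.9, 3.0), nom = 2L, ord = 3L)

# product over the continuous kernels, plus the nominal and ordinal kernels
sim <- prod(k_cont(xi$cont, xj$cont, h = c(1, 2))) +
  k_nom(xi$nom, xj$nom, lam = 0.3, c = 3) +
  k_ord(xi$ord, xj$ord, lam = 0.5)
sim   # larger values mean the two observations are more similar
```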
Each kernel requires a bandwidth specification, which can either be a user-defined numeric vector (one bandwidth per variable, possibly obtained from alternative bandwidth selection methodologies) or one of two built-in bandwidth specification methods. The mscv bandwidth selection routine is based on the maximum-similarity cross-validation routine by Ghashti and Thompson (2023), invoked by the function mscv.dkps. The np bandwidth selection routine follows maximum-likelihood cross-validation techniques described by Li and Racine (2007) and Li and Racine (2003) for kernel density estimation of mixed-type data. Bandwidths will differ for each variable.
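The admissible ranges for user-supplied bandwidths (continuous > 0, ordinal in [0, 1], u_aitken in [0, 1], u_aitchisonaitken in [0, (c-1)/c], as noted in the examples below) can be checked with a small helper; check_bw is a hypothetical name, not a kdml function:

```r
# validate a single bandwidth against the range for its variable type;
# c_levels is the number of unique values of a nominal variable
check_bw <- function(bw, type, c_levels = NA) {
  switch(type,
         continuous      = bw > 0,
         ordinal         = bw >= 0 && bw <= 1,
         aitken          = bw >= 0 && bw <= 1,
         aitchisonaitken = bw >= 0 && bw <= (c_levels - 1) / c_levels)
}
check_bw(5.0, "continuous")                     # TRUE
check_bw(0.3, "aitchisonaitken", c_levels = 3)  # TRUE: 0.3 <= 2/3
check_bw(0.8, "aitchisonaitken", c_levels = 3)  # FALSE: 0.8 > 2/3
```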
Data contained in the data frame df may constitute any combination of continuous, nominal, or ordinal data, which is to be specified in df using factor for nominal data and ordered for ordinal data. Data can be entered in an arbitrary order and data types will be detected automatically. User-inputted vectors of bandwidths bw must be defined in the same order as the variables in df, to ensure they are matched correctly by the routine.
There are many kernels which can be specified by the user. The majority of the continuous kernel functions may be found in Cameron and Trivedi (2005), Härdle et al. (2004) or Silverman (1986). Nominal kernels use a variation on Aitchison and Aitken's (1976) kernel, while ordinal kernels use a variation of the Wang and van Ryzin (1981) kernel. Both nominal and ordinal kernel functions can be found in Li and Racine (2007), Li and Racine (2003), Ouyang et al. (2006), and Titterington and Bowman (1985).
dkps returns a list object with the following components:
distances | an n x n numeric matrix of pairwise distances between the n observations in df. |
bandwidths | a numeric vector of bandwidths, one per variable in df. |
John R. J. Thompson [email protected], Jesse S. Ghashti [email protected]
Aitchison, J. and C.G.G. Aitken (1976), “Multivariate binary discrimination by the kernel method”, Biometrika, 63, 413-420.
Cameron, A. and P. Trivedi (2005), “Microeconometrics: Methods and Applications”, Cambridge University Press.
Ghashti, J.S. and J.R.J Thompson (2023), “Mixed-type Distance Shrinkage and Selection for Clustering via Kernel Metric Learning”, arXiv preprint arXiv:2306.01890.
Härdle, W., and M. Müller and S. Sperlich and A. Werwatz (2004), “Nonparametric and Semiparametric Models”, (Vol. 1). Berlin: Springer.
Li, Q. and J.S. Racine (2007), “Nonparametric Econometrics: Theory and Practice”, Princeton University Press.
Li, Q. and J.S. Racine (2003), “Nonparametric estimation of distributions with categorical and continuous data”, Journal of Multivariate Analysis, 86, 266-292.
Ouyang, D. and Q. Li and J.S. Racine (2006), “Cross-validation and the estimation of probability distributions with categorical data”, Journal of Nonparametric Statistics, 18, 69-100.
Silverman, B.W. (1986), “Density Estimation”, London: Chapman and Hall.
Titterington, D.M. and A.W. Bowman (1985), “A comparative study of smoothing procedures for ordered categorical data”, Journal of Statistical Computation and Simulation, 21(3-4), 291-312.
Wang, M.C. and J. van Ryzin (1981), “A class of smooth estimators for discrete distributions”, Biometrika, 68, 301-309.
# example data frame with mixed numeric, nominal, and ordinal data
levels <- c("Low", "Medium", "High")
df <- data.frame(
  x1 = runif(100, 0, 100),
  x2 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x3 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x4 = rnorm(100, 10, 3),
  x5 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels),
  x6 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels))

# minimal implementation requires just the data frame; defaults to the mscv
# bandwidth specification technique and the default kernel functions
d1 <- dkps(df = df)
# d1$bandwidths to see the mscv-obtained bandwidths
# d1$distances to see the distance matrix

# try the np bandwidth selection, which has a few continuous and ordinal
# kernels to choose from; the default kernel functions are recommended
d2 <- dkps(df = df, bw = "np")

# precomputed bandwidth example
# note that continuous variables require bandwidths > 0,
# ordinal variables require bandwidths in [0,1],
# and for nominal variables, u_aitken requires bandwidths in [0,1]
# and u_aitchisonaitken in [0,(c-1)/c],
# where c is the number of unique values in the i-th column of df.
# any bandwidths outside these ranges will result in a warning message
bw_vec <- c(1.0, 0.5, 0.5, 5.0, 0.3, 0.3)
d3 <- dkps(df = df, bw = bw_vec)

# user-specified kernel functions example
d5 <- dkps(df = df, bw = "mscv", cFUN = "c_epanechnikov",
           uFUN = "u_aitken", oFUN = "o_habbema")
This function calculates the pairwise distances between mixed-type observations consisting of continuous (numeric), nominal (factor), and ordinal (ordered) variables using the method described in Ghashti (2024). This kernel metric learning methodology calculates a kernel summation similarity function, with a variety of options for kernel functions associated with each variable type, and returns a distance matrix that can be used in any distance-based algorithm.
dkss(df, bw = "mscv", cFUN = "c_gaussian", uFUN = "u_aitken",
     oFUN = "o_wangvanryzin", stan = TRUE, verbose = FALSE)
df | a data.frame containing the mixed-type data, with continuous variables as numeric, nominal variables as factor, and ordinal variables as ordered. |
bw | numeric bandwidth vector of length equal to the number of variables in df, or a character string specifying a bandwidth selection method: "mscv" for maximum-similarity cross-validation, or "np" for likelihood cross-validation via the np package. Defaults to "mscv". |
cFUN | character value specifying the continuous kernel function. Options include c_gaussian (default) and c_epanechnikov, among others. |
uFUN | character value specifying the nominal kernel function for unordered factors. Options include u_aitken (default) and u_aitchisonaitken. |
oFUN | character value specifying the ordinal kernel function for ordered factors. Options include o_wangvanryzin (default) and o_habbema. |
stan | a logical value which specifies whether to scale the resulting distance matrix between 0 and 1 using min-max normalization. If set to FALSE, the unscaled distance matrix is returned. Defaults to TRUE. |
verbose | a logical value which specifies whether to print procedural steps to the console. Defaults to FALSE. |
dkss implements the distance using kernel summation similarity (DKSS) as described by Ghashti (2024). This approach uses summation kernels for continuous, nominal, and ordinal data, which are then summed over all variable types to return the pairwise distance between mixed-type data.
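A toy version of the summation-similarity idea, with a min-max-scaled distance at the end: the kernels and the similarity-to-distance conversion below are illustrative assumptions, not dkss internals.

```r
k_cont <- function(x, y, h) exp(-0.5 * ((x - y) / h)^2)  # Gaussian kernel
k_nom  <- function(x, y, lam) ifelse(x == y, 1 - lam, lam)

df <- data.frame(x1 = c(0.1, 0.2, 5.0),
                 x2 = factor(c("A", "A", "B")))
n <- nrow(df)
S <- matrix(0, n, n)
for (i in 1:n) for (j in 1:n)
  S[i, j] <- k_cont(df$x1[i], df$x1[j], h = 1) +       # summation over
             k_nom(df$x2[i], df$x2[j], lam = 0.3)      # all variables
D <- max(S) - S                          # high similarity -> low distance
D <- (D - min(D)) / (max(D) - min(D))    # min-max normalization to [0, 1]
round(D, 2)                              # rows 1 and 2 are close; row 3 is far
```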
There are several kernels to select from. The continuous kernel functions may be found in Cameron and Trivedi (2005), Härdle et al. (2004) or Silverman (1986). Nominal kernels use a variation on Aitchison and Aitken's (1976) kernel, while ordinal kernels use a variation of the Wang and van Ryzin (1981) kernel. Both nominal and ordinal kernel functions can be found in Li and Racine (2007), Li and Racine (2003), Ouyang et al. (2006), and Titterington and Bowman (1985).
Each kernel requires a bandwidth specification, which can either be a user-defined numeric vector (one bandwidth per variable, possibly obtained from alternative bandwidth selection methodologies) or one of two built-in bandwidth selection methods. The mscv bandwidth selection is based on maximum-similarity cross-validation by Ghashti and Thompson (2024), invoked by the function mscv.dkss. The np bandwidth selection follows the maximum-likelihood cross-validation method described by Li and Racine (2007) and Li and Racine (2003) for kernel density estimation of mixed-type data.
Data contained in the data frame df may constitute any combination of continuous, nominal, or ordinal data, which is to be specified in df using factor for nominal data and ordered for ordinal data. Data types can be in any order and will be detected automatically. User-inputted vectors of bandwidths bw must be specified in the same order as the variables in df, to ensure they are matched correctly by the routine.
dkss returns a list object with the following components:
distances | an n x n numeric matrix of pairwise distances between the n observations in df. |
bandwidths | a numeric vector of bandwidths, one per variable in df. |
John R. J. Thompson [email protected], Jesse S. Ghashti [email protected]
Aitchison, J. and C.G.G. Aitken (1976), “Multivariate binary discrimination by the kernel method”, Biometrika, 63, 413-420.
Cameron, A. and P. Trivedi (2005), “Microeconometrics: Methods and Applications”, Cambridge University Press.
Ghashti, J.S. (2024), “Similarity Maximization and Shrinkage Approach in Kernel Metric Learning for Clustering Mixed-type Data (T)”, University of British Columbia.
Härdle, W., and M. Müller and S. Sperlich and A. Werwatz (2004), “Nonparametric and Semiparametric Models”, (Vol. 1). Berlin: Springer.
Li, Q. and J.S. Racine (2007), “Nonparametric Econometrics: Theory and Practice”, Princeton University Press.
Li, Q. and J.S. Racine (2003), “Nonparametric estimation of distributions with categorical and continuous data”, Journal of Multivariate Analysis, 86, 266-292.
Ouyang, D. and Q. Li and J.S. Racine (2006), “Cross-validation and the estimation of probability distributions with categorical data”, Journal of Nonparametric Statistics, 18, 69-100.
Silverman, B.W. (1986), “Density Estimation”, London: Chapman and Hall.
Titterington, D.M. and A.W. Bowman (1985), “A comparative study of smoothing procedures for ordered categorical data”, Journal of Statistical Computation and Simulation, 21(3-4), 291-312.
Wang, M.C. and J. van Ryzin (1981), “A class of smooth estimators for discrete distributions”, Biometrika, 68, 301-309.
# example data frame with mixed numeric, nominal, and ordinal data
levels <- c("Low", "Medium", "High")
df <- data.frame(
  x1 = runif(100, 0, 100),
  x2 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x3 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x4 = rnorm(100, 10, 3),
  x5 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels),
  x6 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels))

# minimal implementation requires just the data frame; defaults to the mscv
# bandwidth specification technique and the default kernel functions
d1 <- dkss(df = df)
# d1$bandwidths to see the mscv-obtained bandwidths
# d1$distances to see the distance matrix

# try the np bandwidth selection, which has a few continuous and ordinal
# kernels to choose from; the default kernel functions are recommended
d2 <- dkss(df = df, bw = "np")

# precomputed bandwidth example
# note that continuous variables require bandwidths > 0,
# ordinal variables require bandwidths in [0,1],
# and for nominal variables, u_aitken requires bandwidths in [0,1]
# and u_aitchisonaitken in [0,(c-1)/c],
# where c is the number of unique values in the i-th column of df.
# any bandwidths outside these ranges will result in a warning message
bw_vec <- c(1.0, 0.5, 0.5, 5.0, 0.3, 0.3)
d3 <- dkss(df = df, bw = bw_vec)

# user-specified kernel functions example
d5 <- dkss(df = df, bw = "mscv", cFUN = "c_epanechnikov",
           uFUN = "u_aitken", oFUN = "o_habbema")
This package contains nonparametric kernel methods for calculating pairwise distances between mixed-type observations. These methods can be used in any distance-based algorithm, with emphasis placed on usage in clustering or classification applications. Descriptions of the implementation of these methods can be found in Ghashti (2024) and Ghashti and Thompson (2024).
This package contains two functions for pairwise distance calculations of mixed-type data based on two different methods. Kernel methods also require variable-specific bandwidths, with two additional functions for the bandwidth specification methods. Additionally, this package contains a function for mixed-type data generation.
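For instance, the distance matrices returned by these functions can be handed directly to base R's distance-based clustering tools; here a toy Euclidean matrix stands in for a dkps()/dkss() result:

```r
set.seed(1)
# toy data: two well-separated groups of 10 observations each
X <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),
           matrix(rnorm(20, mean = 5), ncol = 2))
D <- as.matrix(dist(X))          # stand-in for dkps(df)$distances
# hierarchical clustering on the precomputed distance matrix
hc <- hclust(as.dist(D), method = "average")
cl <- cutree(hc, k = 2)
table(cl)                        # two recovered clusters
```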
John R.J. Thompson <[email protected]>, Jesse S. Ghashti <[email protected]>
Maintainer: John R.J. Thompson <[email protected]>
We would like to acknowledge funding support from the University of British Columbia Aspire Fund (UBC, www.ok.ubc.ca). We also acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Ghashti, J.S. (2024), “Similarity Maximization and Shrinkage Approach in Kernel Metric Learning for Clustering Mixed-type Data (T)”, University of British Columbia. <https://dx.doi.org/10.14288/1.0443975>
Ghashti, J.S. and J.R.J Thompson (2024), “Mixed-type Distance Shrinkage and Selection for Clustering via Kernel Metric Learning”. Journal of Classification, Accepted.
This function calculates the pairwise similarities between mixed-type observations consisting of continuous (numeric), nominal (factor), and ordinal (ordered) variables using the method described in Ghashti (2024). This kernel similarity learning methodology calculates a kernel summation similarity function, with a variety of options for kernel functions associated with each variable type, and returns a similarity matrix that can be used in any similarity- or distance-based algorithm.
kss(df, bw = "np", npmethod = NULL, cFUN = "c_gaussian", uFUN = "u_aitken",
    oFUN = "o_wangvanryzin", nstart = NULL, stan = TRUE, verbose = FALSE)
df | a data.frame containing the mixed-type data, with continuous variables as numeric, nominal variables as factor, and ordinal variables as ordered. |
bw | numeric bandwidth vector of length equal to the number of variables in df, or the character string "np" for built-in bandwidth selection. Defaults to "np". |
npmethod | character value specifying the np bandwidth selection technique: "cv.ml", "cv.ls", or "normal-reference". Defaults to "cv.ml". |
cFUN | character value specifying the continuous kernel function. Options include c_gaussian (default) and c_epanechnikov, among others. |
uFUN | character value specifying the nominal kernel function for unordered factors. Options include u_aitken (default) and u_aitchisonaitken. |
oFUN | character value specifying the ordinal kernel function for ordered factors. Options include o_wangvanryzin (default) and o_habbema. |
nstart | integer value specifying the number of random starts for the bandwidth search. Defaults to NULL. |
stan | a logical value which specifies whether to scale the resulting matrix between 0 and 1 using min-max normalization. If set to FALSE, the unscaled matrix is returned. Defaults to TRUE. |
verbose | a logical value which specifies whether to print procedural steps to the console. Defaults to FALSE. |
kss implements the kernel summation similarity function (KSS) as described by Ghashti (2024). This approach uses summation kernels for continuous, nominal, and ordinal data, which are then summed over all variable types to return the pairwise similarities between mixed-type data.
There are several kernels to select from. The continuous kernel functions may be found in Cameron and Trivedi (2005), Härdle et al. (2004) or Silverman (1986). Nominal kernels use a variation on Aitchison and Aitken's (1976) kernel, while ordinal kernels use a variation of the Wang and van Ryzin (1981) kernel. Both nominal and ordinal kernel functions can be found in Li and Racine (2007), Li and Racine (2003), Ouyang et al. (2006), and Titterington and Bowman (1985).
Each kernel requires a bandwidth specification, which can either be a user-defined numeric vector (one bandwidth per variable, possibly obtained from alternative bandwidth selection methodologies) or the built-in np bandwidth selection. The np bandwidth selection supports three techniques (cv.ml, cv.ls, and normal-reference) described by Li and Racine (2007) and Li and Racine (2003) for kernel density estimation of mixed-type data.
Data contained in the data frame df may constitute any combination of continuous, nominal, or ordinal data, which is to be specified in df using factor for nominal data and ordered for ordinal data. Data types can be in any order and will be detected automatically. User-inputted vectors of bandwidths bw must be specified in the same order as the variables in df, to ensure they are matched correctly by the routine.
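A similarity matrix like the one kss returns can feed spectral clustering directly; the companion spectral.clust function may differ in its details, so the following is a generic unnormalized-Laplacian sketch with a Gaussian-kernel toy matrix standing in for kss(df)$similarities:

```r
set.seed(1)
x <- c(rnorm(10, mean = 0), rnorm(10, mean = 6))
S <- exp(-0.5 * outer(x, x, "-")^2)      # stand-in similarity matrix
L <- diag(rowSums(S)) - S                # unnormalized graph Laplacian
ev <- eigen(L, symmetric = TRUE)         # eigenvalues in decreasing order
# embed on the eigenvectors of the two smallest eigenvalues, then k-means
U <- ev$vectors[, c(length(x) - 1, length(x))]
cl <- kmeans(U, centers = 2, nstart = 10)$cluster
table(cl)
```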
kss returns a list object with the following components:
similarities | an n x n numeric matrix of pairwise similarities between the n observations in df. |
bandwidths | a numeric vector of bandwidths, one per variable in df. |
John R. J. Thompson [email protected], Jesse S. Ghashti [email protected]
Aitchison, J. and C.G.G. Aitken (1976), “Multivariate binary discrimination by the kernel method”, Biometrika, 63, 413-420.
Cameron, A. and P. Trivedi (2005), “Microeconometrics: Methods and Applications”, Cambridge University Press.
Ghashti, J.S. (2024), “Similarity Maximization and Shrinkage Approach in Kernel Metric Learning for Clustering Mixed-type Data (T)”, University of British Columbia.
Härdle, W., and M. Müller and S. Sperlich and A. Werwatz (2004), “Nonparametric and Semiparametric Models”, (Vol. 1). Berlin: Springer.
Li, Q. and J.S. Racine (2007), “Nonparametric Econometrics: Theory and Practice”, Princeton University Press.
Li, Q. and J.S. Racine (2003), “Nonparametric estimation of distributions with categorical and continuous data”, Journal of Multivariate Analysis, 86, 266-292.
Ouyang, D. and Q. Li and J.S. Racine (2006), “Cross-validation and the estimation of probability distributions with categorical data”, Journal of Nonparametric Statistics, 18, 69-100.
Silverman, B.W. (1986), “Density Estimation”, London: Chapman and Hall.
Titterington, D.M. and A.W. Bowman (1985), “A comparative study of smoothing procedures for ordered categorical data”, Journal of Statistical Computation and Simulation, 21(3-4), 291-312.
Wang, M.C. and J. van Ryzin (1981), “A class of smooth estimators for discrete distributions”, Biometrika, 68, 301-309.
mscv.dkps, dkps, mscv.dkss, dkss, spectral.clust
# example data frame with mixed numeric, nominal, and ordinal data
levels <- c("Low", "Medium", "High")
df <- data.frame(
  x1 = runif(100, 0, 100),
  x2 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x3 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x4 = rnorm(100, 10, 3),
  x5 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels),
  x6 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels))

# minimal implementation requires just the data frame
s1 <- kss(df = df)
# s1$bandwidths to see the obtained bandwidths
# s1$similarities to see the similarity matrix

# the np package has a few continuous and ordinal kernels to choose from;
# the default kernel functions are recommended
s2 <- kss(df = df, bw = "np")  # defaults to npmethod "cv.ml"

# precomputed bandwidth example
# note that continuous variables require bandwidths > 0,
# ordinal variables require bandwidths in [0,1],
# and for nominal variables, u_aitken requires bandwidths in [0,1]
# and u_aitchisonaitken in [0,(c-1)/c],
# where c is the number of unique values in the i-th column of df.
# any bandwidths outside these ranges will result in a warning message
bw_vec <- c(1.0, 0.5, 0.5, 5.0, 0.3, 0.3)
s3 <- kss(df = df, bw = bw_vec)

# user-specified kernel functions example with "cv.ls" from np
s4 <- kss(df = df, bw = "np", npmethod = "cv.ls", cFUN = "c_epanechnikov",
          uFUN = "u_aitken", oFUN = "o_wangvanryzin")
This function calculates maximum-similarity cross-validated bandwidths for the distance using kernel product similarity (DKPS). This implementation uses the method described in Ghashti and Thompson (2023) for mixed-type data that includes any of numeric (continuous), factor (nominal), and ordered factor (ordinal) variables. mscv.dkps calculates the bandwidths associated with each kernel function for all variable types and returns a numeric vector of bandwidths that can be used with the dkps pairwise distance calculation.
mscv.dkps(df, nstart = NULL, ckernel = "c_gaussian", ukernel = "u_aitken",
          okernel = "o_wangvanryzin", verbose = FALSE)
df | a data.frame containing the mixed-type data, with continuous variables as numeric, nominal variables as factor, and ordinal variables as ordered. |
nstart | integer number of restarts for the process of finding extrema of the mscv objective function from random initial bandwidth parameters (starting points). If the default of NULL is used, the number of restarts is set internally. |
ckernel | character string specifying the continuous kernel function. Options include c_gaussian (default) and c_epanechnikov, among others. |
ukernel | character string specifying the nominal kernel function for unordered factors. Options include u_aitken (default) and u_aitchisonaitken. |
okernel | character string specifying the ordinal kernel function for ordered factors. Options include o_wangvanryzin (default) and o_habbema. |
verbose | a logical value which specifies whether to output the procedural steps to the console. Defaults to FALSE. |
mscv.dkps implements the maximum-similarity cross-validation (MSCV) technique for bandwidth selection pertaining to the dkps function, as described by Ghashti and Thompson (2023). This approach uses product kernels for continuous variables and summation kernels for nominal and ordinal data, which are then summed over all variable types to return the pairwise distance between mixed-type data.
The maximization procedure for bandwidth selection is based on a similarity
objective function in which the continuous, nominal, and ordinal kernel
functions are evaluated with kernel-specific bandwidths for each variable, and
combined over the numbers of continuous, nominal, and ordinal variables in the
data frame df; see Ghashti and Thompson (2023) for the explicit form of the
objective. The resulting bw
vector returned
contains the bandwidths that yield the highest objective function value.
Data contained in the data frame df
may constitute any combination of
continuous, nominal, or ordinal data, which is to be specified in the data
frame df
using numeric
for continuous data, factor
for nominal data, and ordered
for ordinal data. Data can be
entered in an arbitrary order and data types will be detected automatically.
User-inputted vectors of bandwidths bw
must be defined in the same
order as the variables in the data frame df
, to ensure they are sorted
accordingly by the routine.
There are many kernels which can be specified by the user. Continuous kernel functions may be found in Cameron and Trivedi (2005), Härdle et al. (2004), or Silverman (1986). Nominal kernels use a variation of Aitchison and Aitken's (1976) kernel. Ordinal kernels use a variation of the Wang and van Ryzin (1981) kernel. All nominal and ordinal kernel functions can be found in Li and Racine (2007), Li and Racine (2003), Ouyang et al. (2006), and Titterington and Bowman (1985).
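To make the categorical kernels referenced above concrete, the base-R sketch below implements the standard Aitchison and Aitken (1976) nominal kernel and the Wang and van Ryzin (1981) ordinal kernel. The function names are illustrative only and are not kdml internals:

```r
# Aitchison-Aitken nominal kernel: 1 - lambda on a match, and
# lambda / (c - 1) on a mismatch, where c is the number of levels.
# Valid bandwidths satisfy 0 <= lambda <= (c - 1) / c.
aitchison_aitken <- function(x, y, lambda, c) {
  ifelse(x == y, 1 - lambda, lambda / (c - 1))
}

# Wang-van Ryzin ordinal kernel: 1 - lambda on a match, with geometric
# decay in the level distance |x - y| otherwise (levels coded as
# integers), and 0 <= lambda <= 1.
wang_vanryzin <- function(x, y, lambda) {
  d <- abs(x - y)
  ifelse(d == 0, 1 - lambda, 0.5 * (1 - lambda) * lambda^d)
}

aitchison_aitken(1, 1, lambda = 0.2, c = 3)  # match: 0.8
wang_vanryzin(2, 4, lambda = 0.3)            # decays with |x - y|
```

Note that as lambda approaches 0 both kernels approach an indicator function (no smoothing), which is why a large estimated bandwidth signals a variable that is less relevant to the underlying distance.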
mscv.dkps
returns a list
object, with the
following components:
bw |
a |
fn_value |
a numeric value of the MSCV objective function, obtained using
the |
John R. J. Thompson [email protected], Jesse S. Ghashti [email protected]
Aitchison, J. and C.G.G. Aitken (1976), “Multivariate binary discrimination by the kernel method”, Biometrika, 63, 413-420.
Cameron, A. and P. Trivedi (2005), “Microeconometrics: Methods and Applications”, Cambridge University Press.
Ghashti, J.S. and J.R.J. Thompson (2023), “Mixed-type Distance Shrinkage and Selection for Clustering via Kernel Metric Learning”, Journal of Classification, Accepted.
Härdle, W., and M. Müller and S. Sperlich and A. Werwatz (2004), “Nonparametric and Semiparametric Models”, (Vol. 1). Berlin: Springer.
Li, Q. and J.S. Racine (2007), “Nonparametric Econometrics: Theory and Practice”, Princeton University Press.
Li, Q. and J.S. Racine (2003), “Nonparametric estimation of distributions with categorical and continuous data”, Journal of Multivariate Analysis, 86, 266-292.
Ouyang, D. and Q. Li and J.S. Racine (2006), “Cross-validation and the estimation of probability distributions with categorical data”, Journal of Nonparametric Statistics, 18, 69-100.
Silverman, B.W. (1986), “Density Estimation”, London: Chapman and Hall.
Titterington, D.M. and A.W. Bowman (1985), “A comparative study of smoothing procedures for ordered categorical data”, Journal of Statistical Computation and Simulation, 21(3-4), 291-312.
Wang, M.C. and J. van Ryzin (1981), “A class of smooth estimators for discrete distributions”, Biometrika, 68, 301-309.
# example data frame with mixed numeric, nominal, and ordinal data.
levels = c("Low", "Medium", "High")
df <- data.frame(
  x1 = runif(100, 0, 100),
  x2 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x3 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x4 = rnorm(100, 10, 3),
  x5 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels),
  x6 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels))

# minimal implementation requires just the data frame, with defaults
bw <- mscv.dkps(df = df)

# specify number of starts and kernel functions
bw2 <- mscv.dkps(df = df, nstart = 5, ckernel = "c_triangle",
                 ukernel = "u_aitken", okernel = "o_liracine")
This function calculates maximum-similarity cross-validated bandwidths for the
distance using kernel summation similarity. This implementation uses the method
described in Ghashti (2024) for mixed-type data that includes any of numeric
(continuous), factor (nominal), and ordered factor (ordinal) variables.
mscv.dkss
calculates the bandwidths associated with each kernel function
for variable types and returns a numeric vector of bandwidths that can be used
with the dkss
pairwise distance calculation.
mscv.dkss(df, nstart = NULL, ckernel = "c_gaussian", ukernel = "u_aitken", okernel = "o_wangvanryzin", verbose = FALSE)
df |
a |
nstart |
integer number of restarts for the process of finding extrema of the mscv
function from random initial bandwidth parameters (starting points). If the
default of |
ckernel |
character string specifying the continuous kernel function. Options include
|
ukernel |
character string specifying the nominal kernel function for unordered factors.
Options include |
okernel |
character string specifying the ordinal kernel function for ordered factors.
Options include |
verbose |
a logical value which specifies whether to output the |
mscv.dkss
implements the maximum-similarity cross-validation (MSCV)
bandwidth selection technique for the dkss
function, described
by Ghashti (2024). This approach uses summation kernels for continuous,
nominal and ordinal data, which are then summed over all variable types to
return the pairwise distance between mixed-type data.
The maximization procedure for bandwidth selection is based on a similarity
objective function in which the continuous, nominal, and ordinal kernel
functions are evaluated with kernel-specific bandwidths for each variable, and
combined over the numbers of continuous, nominal, and ordinal variables in the
data frame df; see Ghashti (2024) for the explicit form of the objective.
The bw
vector returned contains the
bandwidths that yield the highest objective function value.
Data contained in the data frame df
may constitute any combination of
continuous, nominal, or ordinal data, which is to be specified in the data
frame df
using numeric
for continuous data,
factor
for nominal data, and ordered
for ordinal
data. Data can be entered in an arbitrary order and data types will be
detected automatically. User-inputted vectors of bandwidths bw
must be
defined in the same order as the variables in the data frame df
, to
ensure they are sorted accordingly by the routine.
There are many kernels which can be specified by the user. Continuous kernel functions may be found in Cameron and Trivedi (2005), Härdle et al. (2004), or Silverman (1986). Nominal kernels use a variation of Aitchison and Aitken's (1976) kernel. Ordinal kernels use a variation of the Wang and van Ryzin (1981) kernel. All nominal and ordinal kernel functions can be found in Li and Racine (2007), Li and Racine (2003), Ouyang et al. (2006), and Titterington and Bowman (1985).
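To illustrate the summation structure described above, the base-R sketch below builds a toy pairwise similarity by summing one kernel value per variable: a Gaussian kernel for numeric columns and a simple Aitken-style matching kernel for categorical columns. Function and variable names are illustrative only, not kdml internals:

```r
# Toy mixed-type summation similarity: one kernel value per variable,
# summed across all variables to give the pairwise similarity matrix.
sum_similarity <- function(df, bw) {
  n <- nrow(df)
  S <- matrix(0, n, n)
  for (k in seq_along(df)) {
    x <- df[[k]]
    if (is.numeric(x)) {
      # Gaussian kernel on a continuous variable (bandwidth > 0)
      Sk <- outer(x, x, function(a, b) exp(-0.5 * ((a - b) / bw[k])^2))
    } else {
      # matching kernel on a categorical variable: 1 - lambda on a
      # match, lambda otherwise (bandwidth in [0, 1])
      Sk <- outer(as.integer(x), as.integer(x),
                  function(a, b) ifelse(a == b, 1 - bw[k], bw[k]))
    }
    S <- S + Sk
  }
  S
}

df <- data.frame(x1 = c(0, 1, 2), x2 = factor(c("A", "A", "B")))
S <- sum_similarity(df, bw = c(1.0, 0.2))
```

Each diagonal entry equals the sum of the per-variable kernel maxima, so similarities are largest for an observation paired with itself, as a similarity function should be.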
mscv.dkss
returns a list
object, with the
following components:
bw |
a |
fn_value |
a numeric value of the MSCV objective function, obtained using
the |
John R. J. Thompson [email protected], Jesse S. Ghashti [email protected]
Aitchison, J. and C.G.G. Aitken (1976), “Multivariate binary discrimination by the kernel method”, Biometrika, 63, 413-420.
Cameron, A. and P. Trivedi (2005), “Microeconometrics: Methods and Applications”, Cambridge University Press.
Ghashti, J.S. (2024), “Similarity Maximization and Shrinkage Approach in Kernel Metric Learning for Clustering Mixed-type Data”, PhD thesis, University of British Columbia.
Härdle, W., and M. Müller and S. Sperlich and A. Werwatz (2004), “Nonparametric and Semiparametric Models”, (Vol. 1). Berlin: Springer.
Li, Q. and J.S. Racine (2007), “Nonparametric Econometrics: Theory and Practice”, Princeton University Press.
Li, Q. and J.S. Racine (2003), “Nonparametric estimation of distributions with categorical and continuous data”, Journal of Multivariate Analysis, 86, 266-292.
Ouyang, D. and Q. Li and J.S. Racine (2006), “Cross-validation and the estimation of probability distributions with categorical data”, Journal of Nonparametric Statistics, 18, 69-100.
Silverman, B.W. (1986), “Density Estimation”, London: Chapman and Hall.
Titterington, D.M. and A.W. Bowman (1985), “A comparative study of smoothing procedures for ordered categorical data”, Journal of Statistical Computation and Simulation, 21(3-4), 291-312.
Wang, M.C. and J. van Ryzin (1981), “A class of smooth estimators for discrete distributions”, Biometrika, 68, 301-309.
# example data frame with mixed numeric, nominal, and ordinal data.
levels = c("Low", "Medium", "High")
df <- data.frame(
  x1 = runif(100, 0, 100),
  x2 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x3 = factor(sample(c("A", "B", "C"), 100, TRUE)),
  x4 = rnorm(100, 10, 3),
  x5 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels),
  x6 = ordered(sample(c("Low", "Medium", "High"), 100, TRUE), levels = levels))

# minimal implementation requires just the data frame, with defaults
bw <- mscv.dkss(df = df)

# specify number of starts and kernel functions
bw2 <- mscv.dkss(df = df, nstart = 5, ckernel = "c_triangle",
                 ukernel = "u_aitken", okernel = "o_liracine")
This function performs spectral clustering, with a k-means step, using precomputed similarity or distance matrices, and returns a vector of cluster assignments.
spectral.clust(S, k, nstart = 10, iter.max = 1000, is.sim = NULL, neighbours = 10)
S |
a |
k |
integer value specifying the number of clusters to form. This is passed to
the |
nstart |
integer value specifying the number of random starts for the kmeans step. Defaults to 10. |
iter.max |
integer value specifying the maximum number of iterations for the
|
is.sim |
logical value indicating whether the input matrix |
neighbours |
integer value specifying the number of nearest neighbours to consider when
constructing the graph Laplacian. This helps in determining the structure
of the graph from the similarity or distance matrix. Defaults to |
spectral.clust
implements spectral clustering on pairwise similarity or
distance matrices, following the method described by Ng et al. (2001). The
function first constructs an adjacency matrix from the input similarity or
distance matrix S
using the neighbours
parameter to define the
nearest connections. If S
is a similarity matrix (is.sim = TRUE
),
the function retains the largest values corresponding to the neighbours
nearest observations. If S
is a distance matrix (is.sim = FALSE
),
it retains the smallest values for the nearest observations. The adjacency
matrix is symmetrized and used to compute the unnormalized Laplacian matrix.
The eigenvectors corresponding to the smallest eigenvalues of the Laplacian
are extracted and clustered using the kmeans
algorithm. The number of
clusters, k
, and parameters such as the number of random starts
(nstart
) and maximum iterations (iter.max
) for the kmeans
step are user-specified.
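The steps above (kNN adjacency, unnormalized Laplacian, smallest eigenvectors, kmeans) can be sketched in a few lines of base R. This is a minimal illustration of the technique, not the kdml implementation, and the function name is hypothetical:

```r
# Minimal unnormalized-Laplacian spectral clustering on a distance matrix.
spectral_sketch <- function(D, k, neighbours = 10) {
  n <- nrow(D)
  A <- matrix(0, n, n)
  for (i in 1:n) {
    # connect each point to its 'neighbours' nearest points
    # (position 1 in the ordering is the point itself, distance 0)
    nn <- order(D[i, ])[2:(neighbours + 1)]
    A[i, nn] <- 1
  }
  A <- pmax(A, t(A))          # symmetrize the adjacency matrix
  L <- diag(rowSums(A)) - A   # unnormalized graph Laplacian
  ev <- eigen(L, symmetric = TRUE)
  # eigen() sorts eigenvalues in decreasing order, so the eigenvectors
  # of the k smallest eigenvalues are the last k columns
  V <- ev$vectors[, (n - k + 1):n]
  kmeans(V, centers = k, nstart = 10)$cluster
}

D <- as.matrix(dist(iris[, -5]))
cl <- spectral_sketch(D, k = 3, neighbours = 10)
table(cl, iris[, 5])
```

The embedding in the bottom eigenvectors separates weakly connected components of the kNN graph, which is why the final kmeans step can recover clusters that are not linearly separable in the original feature space.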
spectral.clust
returns a list
object with the following components:
clusters |
an |
S |
the original |
John R. J. Thompson [email protected], Jesse S. Ghashti [email protected]
Ng, A., M. Jordan and Y. Weiss (2001), “On spectral clustering: Analysis and an algorithm”, Advances in Neural Information Processing Systems, 14.
mscv.dkps
, dkps
, mscv.dkss
,
dkss
, link{kss}
# load the Iris dataset
dat <- iris[,-5]

# calculate pairwise similarities using maximum likelihood cross-validation
S <- kss(dat, bw = "np", npmethod = "cv.ml", cFUN = "c_gaussian", verbose = TRUE)

# cluster points using spectral clustering and compare to true class labels
cl <- spectral.clust(S$similarities, 3, is.sim = TRUE)
table(cl$clusters, iris[,5])

# try a different number of neighbours
cl2 <- spectral.clust(S$similarities, 3, is.sim = TRUE, neighbours = 4)
table(cl2$clusters, iris[,5])