I am broadly interested in the development and application of machine learning methods. In my current position as a postdoctoral researcher at the Alan Turing Institute I focus on developing AI-based tools for data wrangling, in an effort to automate the tedious manual tasks of data preparation and data cleaning that often precede a machine learning analysis. I've worked on change point detection, data parsing, matrix factorization, multiclass SVMs, and sparse regression, among other things. Because my research is often focused on developing methods that work well in the real world, I have also created easy-to-use software packages for most of my research projects.
- Wrangling Messy CSV Files by Detecting Row and Type Patterns (HTML; PDF)
G. J. J. van den Burg, A. Nazabal, and C. Sutton. Data Mining and Knowledge Discovery, 2019. Abstract: Data scientists spend the majority of their time on preparing data for analysis. One of the first steps in this preparation phase is to load the data from the raw storage format. Comma-separated value (CSV) files are a popular format for tabular data due to their simplicity and ostensible ease of use. However, formatting standards for CSV files are not followed consistently, so each file requires manual inspection and potentially repair before the data can be loaded, an enormous waste of human effort for a task that should be one of the simplest parts of data science. The first and most essential step in retrieving data from CSV files is deciding on the dialect of the file, such as the cell delimiter and quote character. Existing dialect detection approaches are few and non-robust. In this paper, we propose a dialect detection method based on a novel measure of data consistency of parsed data files. Our method achieves 97% overall accuracy on a large corpus of real-world CSV files and improves the accuracy on messy CSV files by almost 22% compared to existing approaches, including those in the Python standard library. Our measure of data consistency is not specific to the data parsing problem, and has potential for more general applicability.
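The core idea, scoring candidate dialects by how consistent the parsed table looks, can be illustrated with a toy sketch. The scoring rule below (uniform row widths, multiple columns) is a deliberately simplified stand-in for the paper's pattern-and-type consistency measure, not the actual method:

```python
import csv
import io

def consistency_score(text, delimiter):
    """Toy consistency measure: parse with the candidate delimiter and
    reward parses in which every row has the same multi-column shape.
    Illustrative stand-in for the paper's row/type pattern score."""
    rows = list(csv.reader(io.StringIO(text), delimiter=delimiter))
    if not rows:
        return 0
    widths = [len(r) for r in rows]
    if len(set(widths)) != 1 or widths[0] < 2:
        return 0  # ragged or single-column parses are implausible
    return widths[0]

def detect_delimiter(text, candidates=(",", ";", "\t", "|")):
    """Pick the candidate delimiter whose parse is most consistent."""
    return max(candidates, key=lambda d: consistency_score(text, d))

messy = "a;b;c\n1;2,5;3\n4;5,1;6\n"  # ';'-delimited, ',' as decimal comma
print(detect_delimiter(messy))  # -> ;
```

Parsing the example with ',' yields ragged rows (widths 1 and 2), so the semicolon's uniform three-column parse wins; the real method scores far richer patterns and data types than row width alone.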
- GenSVM: A Generalized Multiclass Support Vector Machine (PDF)
G. J. J. van den Burg and P. J. F. Groenen. Journal of Machine Learning Research, 17(224):1–42, 2016. Abstract: Traditional extensions of the binary support vector machine (SVM) to multiclass problems are either heuristics or require solving a large dual optimization problem. Here, a generalized multiclass SVM is proposed called GenSVM. In this method classification boundaries for a K-class problem are constructed in a (K−1)-dimensional space using a simplex encoding. Additionally, several different weightings of the misclassification errors are incorporated in the loss function, such that it generalizes three existing multiclass SVMs through a single optimization problem. An iterative majorization algorithm is derived that solves the optimization problem without the need for a dual formulation. This algorithm has the advantage that it can use warm starts during cross validation and during a grid search, which significantly speeds up the training phase. Rigorous numerical experiments compare linear GenSVM with seven existing multiclass SVMs on both small and large data sets. These comparisons show that the proposed method is competitive with existing methods in both predictive accuracy and training time, and that it significantly outperforms several existing methods on these criteria.
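The simplex encoding places the K classes at the vertices of a regular simplex, which lives in a (K−1)-dimensional subspace. The sketch below shows one generic construction of such equidistant vertices (centered standard basis vectors); it is not necessarily the exact encoding matrix used in the paper:

```python
import math

def simplex_vertices(K):
    """Return K points that form a regular simplex: the standard basis
    vectors of R^K, centered so they lie in the (K-1)-dimensional
    hyperplane whose coordinates sum to zero."""
    c = 1.0 / K
    return [[(1.0 if i == k else 0.0) - c for i in range(K)]
            for k in range(K)]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

V = simplex_vertices(4)
# all pairwise distances are equal (sqrt(2)), as a regular simplex requires
d = {round(dist(V[i], V[j]), 10) for i in range(4) for j in range(i + 1, 4)}
print(d)  # -> {1.4142135624}
```

With every pair of class vertices equidistant, no ordering of the classes is implied, which is what makes such an encoding attractive for multiclass boundaries.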
- Probabilistic Sequential Matrix Factorization (PDF)
Ö. D. Akyildiz*, G. J. J. van den Burg*, T. Damoulas, and M. J. F. Steel. Accepted for publication at AISTATS, 2021. Abstract: We introduce the probabilistic sequential matrix factorization (PSMF) method for factorizing time-varying and non-stationary datasets consisting of high-dimensional time-series. In particular, we consider nonlinear Gaussian state-space models where sequential approximate inference results in the factorization of a data matrix into a dictionary and time-varying coefficients with (possibly nonlinear) Markovian dependencies. The assumed Markovian structure on the coefficients enables us to encode temporal dependencies into a low-dimensional feature space. The proposed inference method is solely based on an approximate extended Kalman filtering scheme, which makes the resulting method particularly efficient. PSMF can account for temporal nonlinearities and, more importantly, can be used to calibrate and estimate generic differentiable nonlinear subspace models. We also introduce a robust version of PSMF, called rPSMF, which uses Student-t filters to handle model misspecification. We show that PSMF can be used in multiple contexts: modeling time series with a periodic subspace, robustifying changepoint detection methods, and imputing missing data in high-dimensional time-series of air pollutants measured across London.
- On Memorization in Probabilistic Deep Generative Models (PDF)
G. J. J. van den Burg and C. K. I. Williams. arXiv preprint 2106.03216, 2021. Abstract: Recent advances in deep generative models have led to impressive results in a variety of application domains. Motivated by the possibility that deep learning models might memorize part of the input data, there have been increased efforts to understand how memorization can occur. In this work, we extend a recently proposed measure of memorization for supervised learning (Feldman, 2019) to the unsupervised density estimation problem and simplify the accompanying estimator. Next, we present an exploratory study that demonstrates how memorization can arise in probabilistic deep generative models, such as variational autoencoders. This reveals that the form of memorization to which these models are susceptible differs fundamentally from mode collapse and overfitting. Finally, we discuss several strategies that can be used to limit memorization in practice.
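Feldman's supervised memorization measure compares a model's behavior on an example when it is, and is not, in the training set. Transported to density estimation, a measure of this flavor can be stated schematically as follows (this is the spirit of the construction, not the paper's exact definition):

```latex
% schematic memorization score for a density model p_\theta
% produced by a (possibly randomized) training algorithm \mathcal{A}
M(x_i) \;=\; \mathbb{E}_{\theta \sim \mathcal{A}(D)}\bigl[\log p_\theta(x_i)\bigr]
\;-\; \mathbb{E}_{\theta \sim \mathcal{A}(D \setminus \{x_i\})}\bigl[\log p_\theta(x_i)\bigr]
```

A large value of M(x_i) indicates that the model assigns substantially higher density to x_i only when x_i was part of the training data, which is the hallmark of memorization rather than generalization.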
- An Evaluation of Change Point Detection Algorithms (PDF)
G. J. J. van den Burg and C. K. I. Williams. arXiv preprint 2003.06222, 2020. Abstract: Change point detection is an important part of time series analysis, as the presence of a change point indicates an abrupt and significant change in the data generating process. While many algorithms for change point detection exist, little attention has been paid to evaluating their performance on real-world time series. Algorithms are typically evaluated on simulated data and a small number of commonly-used series with unreliable ground truth. Clearly this does not provide sufficient insight into the comparative performance of these algorithms. Therefore, instead of developing yet another change point detection method, we consider it vastly more important to properly evaluate existing algorithms on real-world data. To achieve this, we present the first data set specifically designed for the evaluation of change point detection algorithms, consisting of 37 time series from various domains. Each time series was annotated by five expert human annotators to provide ground truth on the presence and location of change points. We analyze the consistency of the human annotators, and describe evaluation metrics that can be used to measure algorithm performance in the presence of multiple ground truth annotations. Subsequently, we present a benchmark study where 13 existing algorithms are evaluated on each of the time series in the data set. This study shows that binary segmentation (Scott and Knott, 1974) and Bayesian online change point detection (Adams and MacKay, 2007) are among the best performing methods. Our aim is that this data set will serve as a proving ground in the development of novel change point detection algorithms.
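Evaluating against multiple human annotations is the subtle part of such a benchmark. The toy metric below gives the flavor: a detection counts as correct if it lands within a margin of any annotated change point. This is a simplified illustration only; the paper's metrics (covering and F1 over multiple annotators) are defined more carefully:

```python
def cp_f1(predicted, annotations, margin=5):
    """Toy F1 score for change point detection with several annotators.
    Ground truth is the union of all annotated locations; a prediction
    is a true positive if it falls within `margin` of some annotated
    change point, and an annotated point is recalled if some prediction
    falls within `margin` of it."""
    truth = sorted({t for ann in annotations for t in ann})
    tp = sum(any(abs(p - t) <= margin for t in truth) for p in predicted)
    precision = tp / len(predicted) if predicted else 1.0
    hit = sum(any(abs(p - t) <= margin for p in predicted) for t in truth)
    recall = hit / len(truth) if truth else 1.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# two annotators who roughly agree, one algorithm output
print(cp_f1([100, 205], [[100, 200], [102]], margin=5))  # -> 1.0
```

Taking the union of annotations is one of several possible choices; weighting annotators, or averaging the metric over annotators, changes what "ground truth" means and is exactly the kind of design question the paper addresses.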
- Fast Meta-Learning for Adaptive Hierarchical Classifier Design (PDF)
G. J. J. van den Burg and A. O. Hero. arXiv preprint 1711.03512, 2017. Code: Python. Abstract: We propose a new splitting criterion for a meta-learning approach to multiclass classifier design that adaptively merges the classes into a tree-structured hierarchy of increasingly difficult binary classification problems. The classification tree is constructed from empirical estimates of the Henze-Penrose bounds on the pairwise Bayes misclassification rates that rank the binary subproblems in terms of difficulty of classification. The proposed empirical estimates of the Bayes error rate are computed from the minimal spanning tree (MST) of the samples from each pair of classes. Moreover, a meta-learning technique is presented for quantifying the one-vs-rest Bayes error rate for each individual class from a single MST on the entire dataset. Extensive simulations on benchmark datasets show that the proposed hierarchical method can often be learned much faster than competing methods, while achieving competitive accuracy.
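The MST-based ingredient behind these estimates can be sketched concretely: build a minimal spanning tree over the pooled samples of two classes and look at the fraction of edges that join points from different classes (the Friedman-Rafsky statistic). Well-separated classes give few cross edges. The exact Henze-Penrose bound in the paper involves a further transformation of this quantity; the code below only illustrates the statistic itself:

```python
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def mst_edges(points):
    """Prim's algorithm; returns MST edges as index pairs.
    O(n^2), which is fine for a sketch."""
    n = len(points)
    in_tree = {0}
    edges = []
    best = {j: (dist(points[0], points[j]), 0) for j in range(1, n)}
    while len(in_tree) < n:
        j = min(best, key=lambda k: best[k][0])
        _, i = best.pop(j)
        edges.append((i, j))
        in_tree.add(j)
        for k in best:
            dk = dist(points[j], points[k])
            if dk < best[k][0]:
                best[k] = (dk, j)
    return edges

def cross_edge_fraction(points, labels):
    """Fraction of MST edges joining samples from different classes."""
    edges = mst_edges(points)
    return sum(labels[i] != labels[j] for i, j in edges) / len(edges)

# two well-separated 1-D classes: a single cross edge in the MST
pts = [[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]]
labs = [0, 0, 0, 1, 1, 1]
print(cross_edge_fraction(pts, labs))  # -> 0.2 (1 cross edge out of 5)
```

Intuitively, heavily overlapping classes push this fraction toward what mixing the labels at random would give, and that is what makes it informative about the pairwise Bayes error.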
- SparseStep: Approximating the Counting Norm for Sparse Regularization (PDF)
G. J. J. van den Burg, P. J. F. Groenen, and A. Alfons. arXiv preprint 1701.06967, 2017. Code: R. Abstract: The SparseStep algorithm is presented for the estimation of a sparse parameter vector in the linear regression problem. The algorithm works by adding an approximation of the exact counting norm as a constraint on the model parameters and iteratively strengthening this approximation to arrive at a sparse solution. Theoretical analysis of the penalty function shows that the estimator yields unbiased estimates of the parameter vector. An iterative majorization algorithm is derived which has a straightforward implementation reminiscent of ridge regression. In addition, the SparseStep algorithm is compared with similar methods through a rigorous simulation study which shows it often outperforms existing methods in both model fit and prediction accuracy.
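The approximation of the counting norm can be illustrated directly. A penalty term of the form b²/(b² + γ²) is close to 0 for |b| much smaller than γ and close to 1 for |b| much larger than γ, so as γ shrinks the sum hardens into the number of nonzero coefficients. The sketch below shows this limiting behavior (the paper's full iterative-majorization scheme is of course more involved):

```python
def approx_counting_norm(beta, gamma):
    """Smooth surrogate for the counting (l0) norm: each term
    b^2 / (b^2 + gamma^2) is ~0 for |b| << gamma and ~1 for
    |b| >> gamma. Shrinking gamma over iterations turns the
    surrogate into an exact count of nonzero coefficients."""
    return sum(b * b / (b * b + gamma * gamma) for b in beta)

beta = [0.0, 2.0, -0.5, 0.0, 1.0]  # three nonzero coefficients
for gamma in (1.0, 0.1, 0.001):
    print(round(approx_counting_norm(beta, gamma), 4))
# as gamma shrinks, the value approaches 3, the number of nonzeros
```

Starting with a large γ keeps the problem smooth and nearly convex; tightening γ gradually is the graduated-nonconvexity idea that lets the iterations move toward a sparse solution without getting trapped early.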
- Algorithms for Multiclass Classification and Regularized Regression (PDF)
G. J. J. van den Burg. PhD thesis, Erasmus University Rotterdam, 2018. Abstract:
Multiclass classification and regularized regression problems are very common in modern statistical and machine learning applications. On the one hand, multiclass classification problems require the prediction of class labels: given observations of objects that belong to certain classes, can we predict to which class a new object belongs? On the other hand, the regularized regression problem is a variation of the common regression problem, which measures how changes in independent variables influence an observed outcome. In regularized regression, constraints are placed on the coefficients of the regression model to enforce certain properties in the solution, such as sparsity or limited size.
In this dissertation several new algorithms are presented for both multiclass classification and regularized regression problems. For multiclass classification the GenSVM method is presented. This method extends the binary support vector machine to multiclass classification problems in a way that is both flexible and general, while maintaining competitive performance and training time. In a different chapter, accurate estimates of the Bayes error are applied to both meta-learning and the construction of so-called classification hierarchies: structures in which a multiclass classification problem is decomposed into several binary classification problems.
For regularized regression problems a new algorithm is presented in two parts: first for the sparse regression problem and second as a general algorithm for regularized regression where the regularization function is a measure of the size of the coefficients. In the proposed algorithm graduated nonconvexity is used to slowly introduce the nonconvexity in the problem while iterating towards a solution. The empirical performance and theoretical convergence properties of the algorithm are analyzed with numerical experiments that demonstrate the algorithm's ability to obtain globally optimal solutions.
I aim to make my research accessible by providing software packages for the methods I develop.
- CleverCSV. Implements the method from this paper. PyPI - GitHub.
- SmartSVM. Implements the SmartSVM classifier from this paper. PyPI - GitHub.
- SparseStep. Implements the SparseStep method from this paper. CRAN - GitHub.
- GenSVM. Implements the GenSVM method from this paper. PyPI - CRAN - GitHub.
- Abed. Tool for benchmarking ML methods on compute clusters. PyPI - GitHub.
- SyncRNG. The same random numbers in R and Python. CRAN - PyPI - GitHub.
- Programming – part-time lecturer; set up and pioneered the use of Autolab for this course (2015, 2016)
- Supervised two MSc thesis students in Econometrics, including:
- G. van Rooij, Clustering Stores of Retailers via Consumer Behavior, 2017.
- Supervised four BSc thesis students in Econometrics, including:
- L.W. Hoogenboom, Recommender System Optimization through Collaborative Filtering, 2016.
- E.L.J. Mathol, Neighborhood-based Collaborative Filtering: Providing the best recommendations, 2016.
- M.L. Jongsma, Categorised Neighborhood-based Collaborative Filtering, 2016.