Machine Learning: A Primer
Speaker: Achim Koh
CUNY Graduate Center
Today, many critics push to make our algorithmic world transparent: to lift the cover on “black box” machines and expose and deconstruct the systems by which these computers run. The scope of that feat alone appears daunting, with algorithms laid in dense webs and expressed in arcane and protected jargon; the task becomes stranger still when we consider that many of these programs are not dead, hard-coded, and analyzable, but organic, dynamic processes, driven by forces much like those that make human learning, and the agency that arises from it, so inscrutable.
“Machine Learning: A Primer,” conducted by Achim Koh at CUNY’s Graduate Center, introduced those who attended not only to the principles of machine learning (ML), but also to the issues that arise from the process. The talk split evenly in half, with the first hour devoted to the nature and ontology of machine learning and the second to a discussion of its problems.
Koh introduced the topic by comparison to more traditional algorithmic methods. In the traditional approach, programmers code a set of functions to perform intended manipulations on some input; machine learners, by contrast, train to infer outputs from data sources without the programmer structuring the functions themselves, letting statistical methods and computationally accelerated trial-and-error produce models that fit datasets too large for hard-coded formulations. To illustrate this point, Koh showed a guide (found at: https://jalammar.github.io/visual-interactive-guide-basics-neural-networks/) where the viewer can discover the fundamentals of machine learning by interacting with a small dataset, a line of “best fit,” weights, and biases. Under “Train your Dragon,” a minimal neural network can be hand-tweaked to reduce error in much the way a machine learner would do on its own.
If you find this difficult and tedious, just imagine the effort of finding a line of best fit across millions of datapoints in multiple dimensions. Hence the power of machine learning: it systematizes guesswork (continuous evaluation based on observation) to accomplish objectives previously thought impossible by more formal methods.
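To make that hand-tweaking concrete, here is a minimal sketch of the same idea automated (my own illustration with invented data, not something shown in the talk): a line y = w·x + b, and a loop that repeatedly nudges the weight and bias to shrink the error, just as the “Train your Dragon” exercise asks you to do by hand.

```python
# Minimal sketch (invented data): fit a line y = w*x + b by repeatedly
# nudging the weight and bias in the direction that reduces the squared error.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # made-up (x, y) pairs

w, b = 0.0, 0.0            # weight and bias, starting from a bad guess
learning_rate = 0.01

for step in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w   # adjust the weight
    b -= learning_rate * grad_b   # adjust the bias

error = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
print(f"learned line: y = {w:.2f}x + {b:.2f}  (mean squared error {error:.3f})")
```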
Thus, the goals of ML are threefold: pattern recognition, prediction, and decision-making, and the field is rife with terms to express these ends. Koh explained some of the lingo: the machine learns on “vectors,” or individual lines (rows) of data; “optimization” is the process by which the machine increases its “precision”; this optimization is accomplished through “training,” undertaken in the hope that the machine models the intended slice of reality with “accuracy.”
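A “vector” in this sense is nothing more exotic than a record encoded as numbers. The example below is my own (a hypothetical apartment listing, not one from the talk):

```python
# Hypothetical record turned into a numeric feature vector a machine learner can train on.
listing = {"bedrooms": 2, "square_feet": 850, "near_subway": True}
vector = [listing["bedrooms"], listing["square_feet"], 1.0 if listing["near_subway"] else 0.0]
print(vector)  # [2, 850, 1.0]
```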
Nevertheless, precision does not mean accuracy: “overfitting” can arise when a machine matches its datapoints so closely that the resulting model represents both the data and the noise within it, which paradoxically increases false negatives and false positives.
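A small sketch of the phenomenon (my own example, using invented noisy data rather than anything from the talk): a ninth-degree polynomial matches the training points almost perfectly, yet predicts held-out points worse than a simple line.

```python
# Overfitting in miniature: the more flexible model memorizes the noise.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.2, size=x_train.size)   # a line plus noise
x_test = np.linspace(0.04, 0.96, 12)
y_test = 2 * x_test + rng.normal(0, 0.2, size=x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)                # fit a polynomial
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: training error {train_err:.4f}, held-out error {test_err:.4f}")
```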
The “confusion matrix” shows the possible predicted outcomes matched against the actual state of events in the world. The machine learner is in “error” when it “classifies” falsely, producing those false negatives and positives; “classification” here means taking real-world data and processing it with an algorithm that trained on a prior (training) set of data.
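The table is easy to reproduce in a few lines. The labels and predictions below are invented for illustration, not drawn from the talk:

```python
# Toy confusion matrix: compare a classifier's predictions with the actual labels.
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # 1 = positive class, 0 = negative
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))   # true positives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))   # false positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))   # false negatives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))   # true negatives

print("            predicted +  predicted -")
print(f"actual +    {tp:11d}  {fn:11d}")
print(f"actual -    {fp:11d}  {tn:11d}")
print(f"accuracy:  {(tp + tn) / len(actual):.2f}")
print(f"precision: {tp / (tp + fp):.2f}")
```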
The audience of the talk then learned about the types of machine learning, divided most generally between “supervised” and “unsupervised” learning. I must admit, however, that this distinction was presented unclearly. Supervised learning seems to deal with “targets,” where the machine seeks a function from input data to a predetermined outcome, whereas unsupervised learning deals with recognizing patterns in the relationships between data points and clustering them accordingly (see the sketch below). Other concepts such as “neural networks,” “deep learning,” and “reinforcement learning” were also briefly mentioned.
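As I understood it, the contrast can be put in code. The sketch below is my own toy example (invented one-dimensional data, not from the talk): the supervised half learns a decision threshold from labeled examples, while the unsupervised half discovers two clusters in unlabeled points.

```python
# Supervised: the data come with targets ("A" or "B"); learn to map inputs to them.
labeled = [(1.0, "A"), (1.2, "A"), (0.8, "A"), (4.9, "B"), (5.3, "B"), (5.0, "B")]
a_vals = [x for x, label in labeled if label == "A"]
b_vals = [x for x, label in labeled if label == "B"]
threshold = (sum(a_vals) / len(a_vals) + sum(b_vals) / len(b_vals)) / 2
print("predict 3.9 ->", "A" if 3.9 < threshold else "B")

# Unsupervised: no targets; group the points around two centers (a 1-D k-means).
points = [1.0, 1.2, 0.8, 4.9, 5.3, 5.0]
centers = [min(points), max(points)]                  # crude initial guesses
for _ in range(10):
    clusters = [[], []]
    for p in points:                                  # assign each point to its nearest center
        nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        clusters[nearest].append(p)
    centers = [sum(c) / len(c) if c else centers[i]   # move each center to its cluster's mean
               for i, c in enumerate(clusters)]
print("discovered cluster centers:", [round(c, 2) for c in centers])
```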
Here, we glimpse the growth of the field of machine learning and wonder, not only about its possibility, but about its actuality, concealed behind the facades of our present systems. The applications of machine learning are many and varied, and Koh mentioned “natural language processing,” “computer vision,” “dynamic pricing,” “recommendation and content curation,” “robotics,” and “art” as the highlights. Of special interest, the art of machine learning allows us to gain a visual, aesthetic sense of the processes the technology entails, and Koh presented a number of sites that exhibit these projects (sources below).
With such advances in machine learning techniques, questions should arise as to their use. Machine learners are far from infallible. As with overfitting, the rote nature of algorithmic learning creates problems when dealing with heterogeneous or complex data, in which minority classes within a dataset may appear aberrant and be mishandled by the program. The problem worsens when we consider that we cannot account for these biases outright, since machine learning is opaque in its means. After a machine has trained on a dataset, programmers can only tweak parameters and adjust tolerances to get fewer false negatives or false positives, but never both at the same time: such artificial adjustment skews the entire system, so a decrease in the likelihood of false positives tends to increase the occurrence of false negatives, and vice versa.
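A tiny sketch of that trade-off (the risk scores below are invented for illustration; this is not the system the talk or any study examined): lowering the decision threshold removes false negatives only by adding false positives, and raising it does the reverse.

```python
# Each item is (risk score from a model, whether the event actually occurred).
scored = [(0.9, True), (0.8, True), (0.75, False), (0.6, True),
          (0.55, False), (0.4, True), (0.3, False), (0.2, False)]

for threshold in (0.3, 0.5, 0.7):
    fp = sum(1 for score, actual in scored if score >= threshold and not actual)
    fn = sum(1 for score, actual in scored if score < threshold and actual)
    print(f"threshold {threshold}: false positives = {fp}, false negatives = {fn}")
```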
These issues are not merely theoretical; they are already extant in the penal system (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). Machine learning algorithms now operate within our justice system, assessing the risk that incarcerated individuals will re-offend based on a number of factors (race not explicitly included). The results are staggering.
ProPublica's study finds that the algorithm disproportionately assigns inaccurately high risk scores to black defendants relative to their white counterparts. When risk scores see such wide use at every level of the justice process, one would hope these computational methods would help alleviate bias, not hide it. The study evidences the contrary.
As digital humanists, we have an obligation to see how the digital age affects its participants. The pervasiveness and opacity of machine learning require further study and critique, and so primers like the one run by Achim Koh on this discipline of artificial intelligence are both welcome and necessary.
Machine learning algorithms make digital life easier, letting us leverage computing on those most mundane yet awesome tasks, like analyzing a nearly continuous dataset in search of otherwise obscure patterns. The outputs are monumental accomplishments such as handwriting recognition and near-universal translation. Nevertheless, these neat displays belie the mess and diversity of the input data, and we wonder about the what and how and why of what is left behind. I hope that machine learning improves and continues to provide innovative and unique solutions to today's problems, and I hope that critics continue to challenge its growth.