Semi-Supervised Learning, Explained with Examples

As it sometimes happens, when one approach doesn’t work to solve a problem, you try a different one. When that approach doesn’t work either, it may be a good idea to combine the best parts of both. At least that’s often the case with technology tasks. And machine learning is no exception. You’ve probably heard of the two main ML techniques — supervised and unsupervised learning. The marriage of both those technologies gave birth to the happy medium known as semi-supervised learning.

In this article, we’ll dive into the term, explain how this ML process works, and look at which issues associated with the other two ML types it solves. While we're at it, we'll also review a few real-life semi-supervised learning examples.

What is semi-supervised learning?

In a nutshell, semi-supervised learning (SSL) is a machine learning technique that uses a small portion of labeled data and lots of unlabeled data to train a predictive model.

To better understand the SSL concept, we should look at it through the prism of its two main counterparts.

Supervised vs unsupervised vs semi-supervised machine learning in a nutshell

Supervised learning means training a machine learning model on a labeled dataset. Organic labels are often already available in data, but the process may involve a human expert who adds tags to raw data to show the model the target attributes (answers). In simple terms, a label is a description that tells a model what it is expected to predict.

Supervised learning has a few limitations. This process is
  • slow (it requires human experts to manually label training examples one by one) and
  • costly (a model should be trained on large volumes of hand-labeled data to provide accurate predictions).
Unsupervised learning, on the other hand, is when a model tries to mine hidden patterns, differences, and similarities in unlabeled data by itself, without human supervision. Hence the name. Within this method, data points are grouped into clusters based on similarities.

While unsupervised learning is a cheaper way to perform training tasks, it isn’t a silver bullet. Commonly, the approach
  • has a limited area of applications (mostly for clustering purposes) and
  • provides less accurate results.
Semi-supervised learning bridges supervised learning and unsupervised learning techniques to solve their key challenges. With it, you train an initial model on a few labeled samples and then iteratively apply it to a larger pool of unlabeled data.
  • Unlike unsupervised learning, SSL works for a variety of problems from classification and regression to clustering and association.
  • Unlike supervised learning, the method uses small amounts of labeled data and also large amounts of unlabeled data, which reduces expenses on manual annotation and cuts data preparation time.
Speaking of supervised learning, we have an informative 14-min video explaining how data is prepared for it. Make sure you check it out.

https://www.youtube.com/watch?v=P8ERBy91Y90&ab_channel=AltexSoft

Since unlabeled data is abundant, easy to get, and cheap, semi-supervised learning finds many applications, while the accuracy of results doesn’t suffer.

Let’s look at a real-world scenario such as fraud detection. Say, a company with 10 million users analyzed five percent of all transactions to classify them as fraudulent or not, while the rest of the data wasn’t labeled with “fraud” and “non-fraud” tags. In this case, semi-supervised learning makes it possible to use all of the information without having to hire an army of annotators or sacrifice accuracy. Below, we’ll explain how exactly this magic works.

How semi-supervised learning works

Imagine you have collected a large set of unlabeled data that you want to train a model on. Manually labeling all this information will probably cost you a fortune, besides taking months to complete. That’s when the semi-supervised machine learning method comes to the rescue.

The working principle is quite simple. Instead of adding tags to the entire dataset, you hand-label just a small part of the data and use it to train a model, which is then applied to the ocean of unlabeled data.

Self-training

One of the simplest semi-supervised learning techniques is self-training.

Self-training is a procedure in which you take any supervised method for classification or regression and modify it to work in a semi-supervised manner, taking advantage of both labeled and unlabeled data. The standard workflow is as follows.

Semi-supervised self-training method

  • You pick a small amount of labeled data, e.g., images showing cats and dogs with their respective tags, and you use this dataset to train a base model with the help of ordinary supervised methods.
  • Then you apply the process known as pseudo-labeling — you take the partially trained model and use it to make predictions for the rest of the dataset, which is yet unlabeled. The labels generated this way are called pseudo-labels, as they are produced based on the originally labeled data, which has limitations (say, there may be an uneven representation of classes in the set, such as more dogs than cats, resulting in bias).
  • From this point, you take the most confident predictions made with your model (for example, you want confidence of over 80 percent that a certain image shows a cat, not a dog). If any of the pseudo-labels exceed this confidence level, you add them to the labeled dataset and create a new, combined input to train an improved model.
  • The process can go through several iterations (often around 10), with more and more pseudo-labels being added every time. Provided the data is suitable for the process, the performance of the model will keep increasing at each iteration (see the code sketch after this list).
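To make the workflow more concrete, here is a minimal self-training sketch in Python. It is an illustration only: it assumes a generic scikit-learn classifier and synthetic data standing in for the cat-and-dog images, while the 80 percent confidence threshold and the cap of 10 iterations mirror the values mentioned above.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the cat/dog images: 2,000 samples, only 100 hand-labeled.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_labeled, y_labeled = X[:100], y[:100]
X_unlabeled = X[100:]

CONFIDENCE = 0.8    # keep only pseudo-labels the model is at least 80% sure about
MAX_ITERATIONS = 10

model = LogisticRegression(max_iter=1000)
for _ in range(MAX_ITERATIONS):
    # 1. Train (or retrain) the model on everything labeled so far.
    model.fit(X_labeled, y_labeled)

    if len(X_unlabeled) == 0:
        break

    # 2. Pseudo-label the remaining unlabeled pool.
    probas = model.predict_proba(X_unlabeled)
    confident = probas.max(axis=1) >= CONFIDENCE
    if not confident.any():
        break  # nothing confident enough to add this round

    # 3. Move the confident predictions into the labeled set and repeat.
    pseudo_labels = model.classes_[probas[confident].argmax(axis=1)]
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, pseudo_labels])
    X_unlabeled = X_unlabeled[~confident]

In practice, you would also evaluate the improved model on a held-out labeled set after each iteration to make sure the pseudo-labels aren't quietly degrading performance.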
While there are successful examples of self-training being used, it should be stressed that the performance may vary a lot from one dataset to another. And there are plenty of cases when self-training may decrease the performance compared to taking the supervised route.

Co-training

Derived from self-training and considered its improved version, co-training is another semi-supervised learning technique used when only a small portion of labeled data is available. Unlike the typical self-training process, co-training trains two individual classifiers based on two views of the data.

The views are basically different sets of features that provide additional information about each instance, meaning they are independent given the class. Also, each view is sufficient — the class of sample data can be accurately predicted from each set of features alone.

The original co-training research paper claims that the approach can be successfully used, for example, for web content classification tasks. The description of each web page can be divided into two views: one with the words occurring on that page and the other with the anchor words in links pointing to it.

Semi-supervised co-training method

So, here is how co-training works in simple terms.
  • First, you train a separate classifier (model) for each view with the help of a small amount of labeled data.
  • Then the bigger pool of unlabeled data is added to receive pseudo-labels.
  • Classifiers co-train one another using the pseudo-labels with the highest confidence level. If the first classifier confidently predicts the genuine label for a data sample while the other one makes a prediction error, the data with the confident pseudo-labels assigned by the first classifier updates the second classifier, and vice versa.
  • The final step involves combining the predictions from the two updated classifiers to get one classification result.
As with self-training, co-training goes through many iterations to construct an additional labeled training dataset from the vast amounts of unlabeled data; a simplified code sketch of the loop is shown below.
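The following Python sketch is a deliberately simplified take on co-training: two synthetic feature views stand in for the real page-words and anchor-words features, and both classifiers share one growing pool of pseudo-labels instead of maintaining fully separate training sets. It illustrates the idea rather than reproducing the original algorithm.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in: 20 features split into two "views" of 10 features each.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_a, X_b = X[:, :10], X[:, 10:]

labeled = np.zeros(len(y), dtype=bool)
labeled[:100] = True                    # only 100 hand-labeled samples
y_train = np.where(labeled, y, -1)      # -1 marks "unlabeled"

clf_a, clf_b = GaussianNB(), GaussianNB()
CONFIDENCE, ROUNDS = 0.9, 10

for _ in range(ROUNDS):
    # Train each classifier on its own view of the currently labeled data.
    clf_a.fit(X_a[labeled], y_train[labeled])
    clf_b.fit(X_b[labeled], y_train[labeled])

    if labeled.all():
        break

    # Each classifier pseudo-labels the pool; its confident calls expand
    # the shared labeled set the other classifier is retrained on next round.
    for clf, X_view in ((clf_a, X_a), (clf_b, X_b)):
        pool = np.flatnonzero(~labeled)
        if len(pool) == 0:
            break
        probas = clf.predict_proba(X_view[pool])
        confident = probas.max(axis=1) >= CONFIDENCE
        idx = pool[confident]
        y_train[idx] = clf.classes_[probas[confident].argmax(axis=1)]
        labeled[idx] = True

# Final step: combine the two classifiers' probabilities into one prediction.
def predict(view_a, view_b):
    return (clf_a.predict_proba(view_a) + clf_b.predict_proba(view_b)).argmax(axis=1)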

SSL with graph-based label propagation

A popular way to run SSL is to represent labeled and unlabeled data in the form of graphs and then apply a label propagation algorithm. It spreads human-made annotations through the whole data network.

A typical example of label propagation

If you look at the graph, you will see a network of data points, most of which are unlabeled, with four carrying labels (two red points and two green points representing different classes). The task is to spread these colored labels throughout the network. One way of doing this is to pick, say, point 4 and count all the different paths that travel through the network from point 4 to each colored node. If you do that, you will find that five walks lead to red points and only four to green ones. From that, we can assume that point 4 belongs to the red category. You then repeat this process for every unlabeled point in the graph.
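If you'd rather not implement the walk counting yourself, scikit-learn ships a ready-made label propagation algorithm. The sketch below is a toy illustration, not the exact procedure described above: a 2-D point cloud stands in for the graph from the figure, points marked with -1 are unlabeled, and the model spreads the few known labels through the nearest-neighbor graph it builds over the data.

import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelPropagation

# A toy 2-D point cloud standing in for the graph; -1 means "no label yet".
X, y_true = make_moons(n_samples=200, noise=0.05, random_state=1)
y_partial = np.full(len(y_true), -1)
for cls in (0, 1):                              # reveal just two labels per class,
    idx = np.flatnonzero(y_true == cls)[:2]     # like the red and green points
    y_partial[idx] = cls

# Build a k-nearest-neighbor graph over the points and propagate the labels.
model = LabelPropagation(kernel="knn", n_neighbors=7)
model.fit(X, y_partial)

# transduction_ holds the propagated label for every point,
# including the ones that started out unlabeled.
accuracy = (model.transduction_ == y_true).mean()
print(f"labels recovered for {accuracy:.0%} of the points")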

The practical use of this method can be seen in personalization and recommender systems. With label propagation, you can predict customer interests based on information about other customers. Here, we can apply a variation of the continuity assumption — if two people are connected on social media, for example, it’s highly likely that they will share similar interests.

Semi-supervised learning examples

With the amount of data constantly growing by leaps and bounds, there’s no way for all of it to be labeled in a timely fashion. Think of an active TikTok user who uploads up to 20 videos per day. And there are 1 billion active users. In such a scenario, semi-supervised learning boasts a wide array of use cases, from image and speech recognition to web content and text document classification.

Speech recognition

Labeling audio is a very resource- and time-intensive task, so semi-supervised learning can be used to overcome the challenge and improve performance. Facebook (now Meta) successfully applied semi-supervised learning (namely the self-training method) to its speech recognition models and improved them. It started with a base model trained on 100 hours of human-annotated audio data. Then 500 hours of unlabeled speech data was added, and self-training was used to boost the models' performance. As for the results, the word error rate (WER) decreased by 33.9 percent, which is a significant improvement.

Web content classification

With billions of websites presenting all sorts of content, it would take a huge team of human annotators to organize information on web pages by adding corresponding labels. Variations of semi-supervised learning are used to annotate web content and classify it accordingly to improve user experience. Many search engines, including Google, apply SSL to their ranking components to better understand human language and the relevance of candidate search results to queries. With SSL, Google Search finds content that is most relevant to a particular user query.

Text document classification

Another example of when semi-supervised learning can be used successfully is in the building of a text document classifier. Here, the method is effective because it is really difficult for human annotators to read through multiple word-heavy texts to assign a basic label, like a type or genre.

For example, a classifier can be built on top of deep learning neural networks like LSTM (long short-term memory) networks, which are capable of finding long-term dependencies in data and retaining past information over time. Training a neural net usually requires lots of data, both with and without labels. A semi-supervised learning framework works just fine here, as you can train a base LSTM model on a few hand-labeled text examples and then apply it to a much larger number of unlabeled samples.
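As a rough illustration (not the SALnet implementation itself), here is a hedged Keras sketch: a small LSTM classifier is trained on a handful of labeled sequences and then used to pseudo-label an unlabeled pool. The random integer sequences are stand-ins for tokenized documents.

import numpy as np
import tensorflow as tf

VOCAB_SIZE, SEQ_LEN = 5000, 100
rng = np.random.default_rng(0)

# Random integer sequences stand in for tokenized documents.
X_labeled = rng.integers(0, VOCAB_SIZE, size=(200, SEQ_LEN))
y_labeled = rng.integers(0, 2, size=200)              # a handful of hand labels
X_unlabeled = rng.integers(0, VOCAB_SIZE, size=(5000, SEQ_LEN))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train the base model on the small labeled set...
model.fit(X_labeled, y_labeled, epochs=3, verbose=0)

# ...then pseudo-label the unlabeled pool, keeping only confident predictions
# that can be fed back into the next round of training.
probs = model.predict(X_unlabeled, verbose=0).ravel()
confident = (probs > 0.9) | (probs < 0.1)
X_pseudo = X_unlabeled[confident]
y_pseudo = (probs[confident] > 0.5).astype(int)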

The SALnet text classifier made by researchers from Yonsei University in Seoul, South Korea, demonstrates the effectiveness of the SSL method for tasks like sentiment analysis.

When to use and not use semi-supervised learning

With a minimal amount of labeled data and plenty of unlabeled data, semi-supervised learning shows promising results in classification tasks while leaving the door open for other ML tasks. Basically, the approach can make use of pretty much any supervised algorithm with some modifications. On top of that, SSL fits clustering and anomaly detection purposes too, if the data fits the profile. While a relatively new field, semi-supervised learning has already proved effective in many areas.

But that doesn’t mean semi-supervised learning is applicable to all tasks. If the labeled portion of data isn’t representative of the entire distribution, the approach may fall short. Say, you need to classify images of colored objects that look different from different angles. Unless you have a large amount of labeled data, the results will have poor accuracy. And if you already have lots of labeled data, then semi-supervised learning isn’t the way to go either. Like it or not, many real-life applications still need large volumes of labeled data, so supervised learning won’t go anywhere in the near future.
