Advanced Topics

Semi-Supervised Learning

적은 μ–‘μ˜ Labled data와 λ§Žμ€ μ–‘μ˜ Unlabeled data둜 ν•™μŠ΅μ‹œν‚€λŠ” technique

Generative Models

Graph Based

Graph Neural Networks (GNNs)

Pass messages between pairs of nodes and aggregate them.
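
A minimal sketch of one message-passing round in PyTorch (the names node_feats, adj, w_msg, and w_self are illustrative assumptions, not notation from the lecture):

```python
import torch

def message_passing_round(node_feats, adj, w_msg, w_self):
    """One round of message passing on a graph.

    node_feats: (N, D) node feature matrix
    adj:        (N, N) adjacency matrix (1 where an edge exists)
    w_msg, w_self: (D, D) learnable weight matrices
    """
    # Each node emits a message: a linear transform of its features.
    messages = node_feats @ w_msg            # (N, D)
    # Aggregate incoming messages by summing over neighbors.
    aggregated = adj @ messages              # (N, D)
    # Combine with the node's own features and apply a nonlinearity.
    return torch.relu(node_feats @ w_self + aggregated)
```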

Self-Training (Pseudo-Labeling)

Self-Training

  1. Train the model with labeled data
  2. Predict the unlabeled data using the trained model
  3. Add the most confident unlabeled samples, with their predicted (pseudo) labels, to the labeled data
  4. Repeat 1-3
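
A minimal sketch of this loop, assuming a scikit-learn-style classifier with fit/predict_proba; the 0.95 confidence threshold is an illustrative choice, not a value from the lecture:

```python
import numpy as np

def self_training(model, x_lab, y_lab, x_unlab,
                  threshold=0.95, max_rounds=10):
    for _ in range(max_rounds):
        # 1. Train on the current labeled set.
        model.fit(x_lab, y_lab)
        if len(x_unlab) == 0:
            break
        # 2. Predict the unlabeled data.
        probs = model.predict_proba(x_unlab)
        confidence = probs.max(axis=1)
        pseudo_labels = probs.argmax(axis=1)
        # 3. Move the most confident samples (with their
        #    pseudo-labels) into the labeled set.
        keep = confidence >= threshold
        if not keep.any():
            break
        x_lab = np.concatenate([x_lab, x_unlab[keep]])
        y_lab = np.concatenate([y_lab, pseudo_labels[keep]])
        x_unlab = x_unlab[~keep]
    return model
```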

Consistency Regularization

e.g., Pi-Model, Mean Teacher, …

  1. Train the model with labeled data
  2. Predict the unlabeled data using the trained model
  3. Add noise to the unlabeled data (i.e., data augmentation)
  4. Train the model on the augmented unlabeled data, using the predictions from step 2 as the ground-truth labels
  5. Repeat 1-4
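
A rough Pi-Model-style training step in PyTorch that follows steps 2-4 above; augment stands in for some stochastic noise/augmentation function, and the weight lam is an illustrative hyperparameter:

```python
import torch
import torch.nn.functional as F

def consistency_step(model, x_labeled, y_labeled, x_unlabeled,
                     augment, lam=1.0):
    # Supervised loss on the labeled batch.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)
    # Predictions on the clean unlabeled batch serve as soft targets
    # (step 2); no gradient flows through the targets.
    with torch.no_grad():
        targets = F.softmax(model(x_unlabeled), dim=1)
    # Predictions on a noised/augmented view (step 3) should stay
    # consistent with those targets (step 4).
    noisy_preds = F.softmax(model(augment(x_unlabeled)), dim=1)
    cons_loss = F.mse_loss(noisy_preds, targets)
    return sup_loss + lam * cons_loss
```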

Self-Supervised Learning

Why Self-Supervised Learning?

Goal of Self-Supervised Learning

Denoising Autoencoder

Predict Missing Pieces: Context Encoders

Relative Position of Image Patches

Jigsaw Puzzles

Rotation Prediction
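
To make the pretext-task idea concrete: rotation prediction turns every image into four training examples whose label is the rotation applied, so supervision comes for free. A minimal sketch, assuming image tensors of shape (N, C, H, W):

```python
import torch

def make_rotation_batch(images):
    """Build a self-supervised batch: rotate each image by
    0/90/180/270 degrees; the label is the rotation index (0-3)."""
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns
        rotated.append(torch.rot90(images, k, dims=(-2, -1)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)
```

The resulting batch trains an ordinary classifier, and the learned features can then transfer to downstream tasks.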

S4L: Self-Supervised Semi-Supervised Learning

Domain Adaptation (DA)

Two baseline DA approaches

  1. Domain-invariant feature learning
  2. Pixel-level DA using GAN
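
A minimal sketch of approach 1 in PyTorch, using a gradient-reversal layer as in DANN; this is one common instantiation of domain-invariant feature learning, not necessarily the exact formulation from the lecture:

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the
    backward pass, so the feature extractor learns to *confuse* the
    domain classifier, yielding domain-invariant features."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def domain_adversarial_loss(features, domain_labels, domain_clf, lam=1.0):
    # domain_labels: 0 for source samples, 1 for target samples.
    reversed_feats = GradReverse.apply(features, lam)
    logits = domain_clf(reversed_feats)
    return F.cross_entropy(logits, domain_labels)
```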

Knowledge Transfer

Transfer Learning

Knowledge Distillation

Knowledge distillation is the process of distilling, or transferring, the knowledge of a large model (or an ensemble of models) into a lighter, easier-to-deploy single model without a significant loss in performance.
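
A common concrete form is Hinton-style distillation, where the student matches the teacher's temperature-softened output distribution; a minimal sketch, with the temperature T and mixing weight alpha as illustrative hyperparameters:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.5):
    # Hard loss: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft loss: KL divergence between temperature-softened teacher
    # and student distributions, scaled by T^2 as is standard.
    # (teacher_logits would come from the frozen large model,
    # computed under torch.no_grad().)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft
```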

References

  1. 인곡지λŠ₯ μ‘μš© (Applied Artificial Intelligence, ICE4104), Department of Information and Communication Engineering, Inha University, Prof. 홍성은