Description
Once again, the last several years have reshaped the state of the art in Computer Vision (CV). Non-convolutional approaches, such as Vision Transformers (ViT) and self-attention multi-layer perceptrons (SA-MLP), are quickly emerging, combined with novel optimization techniques and pre-training methods. Notably, ViTs and SA-MLPs are better at incorporating global information about the input data; they are also not spatially invariant, which is more appropriate for cosmic-ray air-shower detectors. This contribution covers several approaches to unsupervised pre-training, a technique that lets a model learn from unlabeled (i.e., experimental) data and thus improves its performance. However, each of the examined approaches is nontrivial to apply to air showers, which poses a challenge yet to be solved.
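To make the idea of unsupervised pre-training concrete, below is a minimal sketch of one common scheme, masked-token reconstruction, applied to a toy attention-based encoder. The abstract does not specify which pre-training approaches the talk examines, so this is only an illustration under assumptions: PyTorch as the framework, a tiny Transformer encoder standing in for a ViT/SA-MLP, 16 detector stations with 4 features each as hypothetical input, and reconstruction of masked stations as the objective.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Minimal stand-in for a ViT/SA-MLP encoder over detector-station tokens
    (hypothetical architecture, not the one used in the talk)."""
    def __init__(self, dim=32, n_features=4):
        super().__init__()
        self.proj = nn.Linear(n_features, dim)   # embed per-station features (assumed: 4)
        self.attn = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, n_features)   # reconstruct the original features

    def forward(self, x):
        return self.head(self.attn(self.proj(x)))

def pretrain_step(model, optimizer, batch, mask_ratio=0.5):
    """One unsupervised pre-training step: hide a fraction of input tokens
    and train the model to reconstruct them from the visible context."""
    mask = torch.rand(batch.shape[:2], device=batch.device) < mask_ratio
    corrupted = batch.clone()
    corrupted[mask] = 0.0                        # zero out the masked stations
    recon = model(corrupted)
    loss = ((recon - batch)[mask] ** 2).mean()   # loss only on masked positions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
unlabeled = torch.randn(8, 16, 4)                # stand-in for unlabeled experimental data
print(pretrain_step(model, optimizer, unlabeled))
```

The key point the sketch conveys is that the training signal comes from the data itself, so unlabeled experimental events can be used directly; applying such schemes to air-shower data is, as the abstract notes, the open challenge.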
Type of Contribution: talk