Chadrick · paper summary: “LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking” · arxiv: https://arxiv.org/abs/2204.08387 · Jun 13, 2022
Chadrick · paper review: “Large Language Models are Zero-Shot Reasoners” · arxiv: https://arxiv.org/abs/2205.11916 · May 29, 2022
Chadrick · paper review: “VOS: LEARNING WHAT YOU DON’T KNOW BY VIRTUAL OUTLIER SYNTHESIS” · arxiv: https://arxiv.org/abs/2202.01197 · Apr 14, 2022
Chadrick · Paper Review: “Donut: Document Understanding Transformer without OCR” · arxiv: https://arxiv.org/abs/2111.15664 · Jan 15, 2022
Chadrick · paper summary: “BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation…” · arxiv: https://arxiv.org/abs/1910.13461 · Jan 11, 2022
Chadrick · paper review: “DocFormer: End-to-End Transformer for Document Understanding” · arxiv: https://arxiv.org/abs/2106.11539 · Nov 23, 2021
Chadrick · paper summary: “LayoutLMv2: Multi-Modal Pre-training for Visually-Rich Document Understanding” · arxiv: https://arxiv.org/abs/2012.14740 · Nov 19, 2021
Chadrick · paper summary: “BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key…” · arxiv: https://arxiv.org/abs/2108.04539 · Nov 10, 2021
Chadrick in Nerd For Tech · paper summary: “Perceiver IO: A General Architecture for Structured Inputs & Outputs” · arxiv: https://arxiv.org/abs/2107.14795 · Sep 27, 2021
Chadrick in Nerd For Tech · paper summary: “Swin Transformer: Hierarchical Vision Transformer using Shifted Windows” · arxiv: https://arxiv.org/abs/2103.14030 · Sep 10, 2021