To obtain course credit, you must complete the following:
=========== 1.12.
1) [Smajzr Michal] LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685
2) Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity https://arxiv.org/abs/2101.03961
3) [Litschmann Jakub] Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) https://arxiv.org/abs/1711.11279
4) [Trhlíková Natalie] Adversarial Examples Are Not Bugs, They Are Features https://arxiv.org/abs/1905.02175
5) [Juránková Anita] Language Models are Multilingual Chain-of-Thought Reasoners https://arxiv.org/abs/2210.03057
6) [Votočka David] The Ethics of Artificial Intelligence https://nickbostrom.com/ethics/artificial-intelligence.pdf
7) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding https://arxiv.org/abs/1810.04805
8) [Loučka Richard Bohuslav] Logical Neural Networks https://arxiv.org/abs/2006.13155
9) [Škrabalová Eliška] Neural Logic Networks https://arxiv.org/abs/1910.08629
10) [Hrdina Filip] Learning Algorithms via Neural Logic Networks https://arxiv.org/abs/1904.01554
11) A Survey of the State of Explainable AI for Natural Language Processing https://arxiv.org/abs/2010.00711
12) [Kudělka Tomáš] Differentiable Logics for Neural Network Training and Verification https://arxiv.org/abs/2207.06741
13) [Lakomý Jan] The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research https://arxiv.org/abs/2010.15581
14) Abduction and Argumentation for Explainable Machine Learning: A Position Survey https://arxiv.org/abs/2010.12896
15) [Kvapil Jiří] Neural Logic Analogy Learning https://arxiv.org/abs/2202.02436
=========== 8.12.
16) [Rástocká Alžbeta] Neural Symbolic Logical Rule Learner for Interpretable Learning https://arxiv.org/abs/2408.11918
17) [Pryč Jan] Logic Gate Neural Networks are Good for Verification https://arxiv.org/abs/2505.19932
18) Categorical Construction of Logically Verifiable Neural Architectures https://arxiv.org/abs/2508.11647
19) [Pastorek Mojmír] Standard Neural Computation Alone Is Insufficient for Logical Intelligence https://arxiv.org/abs/2502.02135
20) [Podmanický Martin] AI-Driven Automation Can Become the Foundation of Next-Era Science of Science Research https://arxiv.org/abs/2505.12039
21) [Vincour Radek] AI Agents: Evolution, Architecture, and Real-World Applications https://arxiv.org/abs/2503.12687
22) Towards Self-Regulating AI: Challenges and Opportunities of AI Model Governance in Financial Services https://arxiv.org/abs/2010.04827
23) [Romančíková Paulína] Generative Pretraining from Pixels https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf
24) Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165
25) [Kárný Tomáš] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale https://arxiv.org/abs/2010.11929
26) [Široký Michael] Swin Transformer: Hierarchical Vision Transformer using Shifted Windows https://arxiv.org/abs/2103.14030
27) [Davies Tomáš] Zero-Shot Text-to-Image Generation https://arxiv.org/abs/2102.12092
28) Learning Transferable Visual Models From Natural Language Supervision https://arxiv.org/abs/2103.00020
29) [Berger Thomas] High-Resolution Image Synthesis with Latent Diffusion Models https://arxiv.org/abs/2112.10752
=========== 15.12.
30) BLOOM: A 176B-Parameter Open-Access Multilingual Language Model https://arxiv.org/abs/2211.05100
31) [Kalenda Martin] Reinforcement Learning from Human Feedback https://arxiv.org/abs/2504.12501
32) [Kladivová Vendula] Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks https://arxiv.org/abs/1703.03400
33) [Kašparová Sofie] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks https://arxiv.org/abs/1803.03635
34) Neural Tangent Kernel: Convergence and Generalization in Neural Networks https://arxiv.org/abs/1806.07572
35) [Kercl Aleš] Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model https://arxiv.org/abs/1911.08265
36) [Kubatka Patrik] Deep Double Descent: Where Bigger Models and More Data Hurt https://arxiv.org/abs/1912.02292
37) [Smékal Samuel] Graph Neural Networks: A Review of Methods and Applications https://arxiv.org/abs/1812.08434
38) [Malíček Filip] Prototypical Networks for Few-shot Learning https://arxiv.org/abs/1703.05175
39) [Čapka Tomáš] Reinforcement Learning with Unsupervised Auxiliary Tasks https://arxiv.org/abs/1611.05397
40) [Duongová My Linh] FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness https://arxiv.org/abs/2205.14135
41) [Netrh Vojtěch] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks https://arxiv.org/abs/2005.11401
42) [Machala Jan] Neural-network quantum state tomography https://www.nature.com/articles/s41567-018-0048-5