Konečný - Teaching - KMI/UMIN Artificial Intelligence


Course Schedule

Lecture: Monday 13:15 - 14:45
Tutorial: Monday 15:00 - 15:45

Teaching Materials

Study materials

Exercises

https://github.com/konyconi/UMIN_AI_exercises/

Presentation Topics

1) [Smajzr Michal]
LoRA: Low-Rank Adaptation of Large Language Models

https://arxiv.org/abs/2106.09685

2)
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity

https://arxiv.org/abs/2101.03961

3) [Litschmann Jakub]
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)

https://arxiv.org/abs/1711.11279

4) [Natalie Trhlíkova]
Adversarial Examples Are Not Bugs, They Are Features

https://arxiv.org/abs/1905.02175

5) [Juránková Anita]
Language Models are Multilingual Chain-of-Thought Reasoners

https://arxiv.org/abs/2210.03057

6) [Votočka David]
The Ethics of Artificial Intelligence

https://nickbostrom.com/ethics/artificial-intelligence.pdf

7)
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

https://arxiv.org/abs/1810.04805

8) [Loučka Richard Bohuslav]
Logical Neural Networks

https://arxiv.org/abs/2006.13155

9) [Škrabalová Eliška]
Neural Logic Networks

https://arxiv.org/abs/1910.08629

10) [Hrdina Filip]
Learning Algorithms via Neural Logic Networks

https://arxiv.org/abs/1904.01554

11)
A Survey of the State of Explainable AI for Natural Language Processing

https://arxiv.org/abs/2010.00711

12) [Tomáš Kudělka]
Differentiable Logics for Neural Network Training and Verification

https://arxiv.org/abs/2207.06741

13) [Jan Lakomý]
The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research

https://arxiv.org/abs/2010.15581

14)
Abduction and Argumentation for Explainable Machine Learning: A Position Survey

https://arxiv.org/abs/2010.12896

15) [Jiří Kvapil]
Neural Logic Analogy Learning

https://arxiv.org/abs/2202.02436

16) [Alžbeta Rástocká]
Neural Symbolic Logical Rule Learner for Interpretable Learning

https://arxiv.org/abs/2408.11918

17)
Logic Gate Neural Networks are Good for Verification

https://arxiv.org/abs/2505.19932

18)
Categorical Construction of Logically Verifiable Neural Architectures

https://arxiv.org/abs/2508.11647

19) [Pastorek Mojmír]
Standard Neural Computation Alone Is Insufficient for Logical Intelligence

https://arxiv.org/abs/2502.02135

20) [Martin Podmanický]
AI-Driven Automation Can Become the Foundation of Next-Era Science of Science Research

https://arxiv.org/abs/2505.12039

21) [Vincour Radek]
AI Agents: Evolution, Architecture, and Real-World Applications

https://arxiv.org/abs/2503.12687

22)
Towards Self-Regulating AI: Challenges and Opportunities of AI Model Governance in Financial Services

https://arxiv.org/abs/2010.04827

23) [Romančíková Paulína]
Generative Pretraining from Pixels

https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf

24)
Language Models are Few-Shot Learners

https://arxiv.org/abs/2005.14165

25) [Kárný Tomáš]
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

https://arxiv.org/abs/2010.11929

26) [Michael Široký]
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

https://arxiv.org/abs/2103.14030

27) [Davies Tomáš]
Zero-Shot Text-to-Image Generation

https://arxiv.org/abs/2102.12092

28)
Learning Transferable Visual Models From Natural Language Supervision

https://arxiv.org/abs/2103.00020

29) [Thomas Berger]
High-Resolution Image Synthesis with Latent Diffusion Models

https://arxiv.org/abs/2112.10752

30)
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

https://arxiv.org/abs/2211.05100

31) [Kalenda Martin]
Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501

32)
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

https://arxiv.org/abs/1703.03400

33) [Kašparová Sofie]
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

https://arxiv.org/abs/1803.03635

34)
Neural Tangent Kernel: Convergence and Generalization in Neural Networks

https://arxiv.org/abs/1806.07572

35) [Kercl Aleš]
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

https://arxiv.org/abs/1911.08265

36) [Patrik Kubatka]
Deep Double Descent: Where Bigger Models and More Data Hurt

https://arxiv.org/abs/1912.02292

37) [Smékal Samuel]
Graph Neural Networks: A Review of Methods and Applications

https://arxiv.org/abs/1812.08434

38) [Malíček Filip]
Prototypical Networks for Few-shot Learning

https://arxiv.org/abs/1703.05175

39) [Čapka Tomáš]
Reinforcement Learning with Unsupervised Auxiliary Tasks

https://arxiv.org/abs/1611.05397

40)
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness

https://arxiv.org/abs/2205.14135

41) [Vojtěch Netrh]
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

https://arxiv.org/abs/2005.11401

Teaching Materials (from 2024)