- Learning by doing
- Trainers with practical experience
- Classroom training
- Detailed course material
- Clear content description
- Tailor-made content possible
- Training guaranteed to proceed
- Small groups
Architecture, training, optimization, and deployment of large language models.
Course Overview
This hands-on course provides an in-depth understanding of large language models (LLMs), covering their architecture, training, optimization, and deployment. Participants will learn how Transformer-based models are built, fine-tuned, and brought into production.
By the end of the course, participants will be able to:
• Understand the Transformer architecture and LLM training objectives
• Compare model families such as GPT, LLaMA, T5, and PaLM
• Fine-tune pretrained models with SFT, adapters, LoRA, and QLoRA
• Optimize inference with quantization and distillation
• Deploy and scale LLMs on cloud platforms
• Address safety, bias, and ethical issues in LLM applications
The course Large Language Models is intended for software developers, data scientists, and machine learning engineers who want to learn how to train, fine-tune, and deploy LLMs.
To participate in the course, basic knowledge of Python and machine learning concepts is required. Familiarity with a deep learning framework such as PyTorch is useful.
The course is conducted under the guidance of the trainer, with theory and practice alternating. Real-world case studies are used to illustrate the concepts.
After successfully completing the course, participants will receive a certificate of participation in Large Language Models.
Module 1: Introduction to LLMs
• What are LLMs?
• The Transformer architecture
• Training objectives (causal, masked)
• Evolution of LLMs (GPT, BERT, T5)
• Open-source vs proprietary LLMs
• Tokenization and vocabulary
• Attention mechanism (see the NumPy sketch after the course outline)
• Model scaling laws
• Transfer learning
• Pretraining vs fine-tuning

Module 2: Model Architectures and Frameworks
• Decoder-only vs encoder-decoder models
• GPT, LLaMA, T5, and PaLM
• Training pipeline overview
• Optimizers (Adam, Adafactor)
• Precision (FP32, FP16, quantization)
• Frameworks: Transformers (HF), Megatron, DeepSpeed
• Parameter tuning vs instruction tuning
• LoRA and QLoRA
• In-context learning
• RLHF (Reinforcement Learning from Human Feedback)

Module 3: Training and Fine-tuning LLMs
• Dataset creation and curation
• Tokenizer customization
• Data preprocessing
• Fine-tuning with Hugging Face (see the LoRA sketch after the course outline)
• SFT (Supervised Fine-Tuning)
• Adapters and LoRA
• Evaluation metrics
• Avoiding overfitting
• Model alignment
• Model evaluation and benchmarking

Module 4: LLM Deployment and Scaling
• Inference optimization
• Model distillation
• Quantization techniques (see the quantization sketch after the course outline)
• Hosting on cloud (AWS, GCP, Azure)
• Using model gateways (Replicate, Hugging Face)
• LangChain and semantic search
• Vector stores and embeddings (see the semantic search sketch after the course outline)
• Caching responses
• Load balancing
• Cost optimization strategies

Module 5: Safety, Bias, and Ethics
• Understanding model biases
• Mitigation strategies
• Model auditing
• Adversarial prompts
• User privacy
• Filtering and moderation
• Red teaming
• Explainability in LLMs
• Interpreting outputs
• Regulatory and legal issues

Module 6: LLM Use Cases and Ecosystem
• Coding assistants
• AI for legal and finance
• Education and learning
• Healthcare and biotech
• Chatbots and agents
• RAG systems
• Tool use and plugins
• Enterprise use of LLMs
• Evaluating new models
• Future directions in LLM research
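The attention mechanism listed in Module 1 can be illustrated in a few lines of NumPy. Below is a minimal sketch of scaled dot-product attention; the tiny shapes and random inputs are illustrative assumptions, not course material.

```python
# Minimal sketch of scaled dot-product attention (Module 1).
# Shapes are deliberately tiny so the arithmetic is easy to inspect.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how strongly each query matches each key
    weights = softmax(scores, axis=-1)  # each row is a probability distribution
    return weights @ V                  # weighted mix of the value vectors

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dimension 8
print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per token
```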
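Module 3's fine-tuning topics come together in Hugging Face Transformers with the PEFT library. A minimal sketch of LoRA fine-tuning follows, assuming a small GPT-2 base model and a hypothetical train.txt text file; the hyperparameters are illustrative, not course-prescribed values.

```python
# Minimal sketch of LoRA fine-tuning with Transformers + PEFT (Module 3).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # small model so the sketch runs on modest hardware
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the frozen base model with low-rank adapters; only the adapters train.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# train.txt is a hypothetical plain-text training file.
data = load_dataset("text", data_files={"train": "train.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # stores only the small adapter weights
```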
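For the quantization techniques named in Module 4, a common approach is loading model weights in 4-bit precision via bitsandbytes. A minimal sketch, assuming an openly available model from the Hugging Face Hub:

```python
# Minimal sketch of 4-bit quantized inference (Module 4).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

name = "facebook/opt-1.3b"  # assumed example model; any causal LM would do
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, quantization_config=bnb,
                                             device_map="auto")

prompt = "Explain model quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Four-bit weights take roughly a quarter of the memory of FP16 weights, which is why quantization appears alongside cost optimization in the module.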
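The vector store, embedding, and RAG topics in Modules 4 and 6 rest on embedding-based semantic search. A minimal sketch follows, with an assumed sentence-transformers model and made-up documents; a real RAG system would store the vectors in a vector database and feed the best matches to an LLM.

```python
# Minimal sketch of embedding-based semantic search (Modules 4 and 6).
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "LoRA adds small trainable matrices to a frozen base model.",
    "Quantization stores weights in fewer bits to cut memory use.",
    "RLHF aligns model behavior with human preferences.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = model.encode(docs, normalize_embeddings=True)  # unit-length vectors

query = "How can I fine-tune a model cheaply?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity reduces to a dot product.
scores = doc_vecs @ q_vec
best = int(np.argmax(scores))
print(f"Best match ({scores[best]:.2f}): {docs[best]}")
```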