Course Large Language Models

Architecture, training, optimization, and deployment of large language models.

  • Course Large Language Models : Content

    Course Overview

This hands-on course provides an in-depth understanding of large language models (LLMs), covering their architecture, training, optimization, and deployment. Participants learn how transformer-based models work, how to fine-tune and evaluate them, and how to deploy them safely and efficiently.

    By the end of the course, participants will be able to:

    • Understand the transformer architecture and how LLMs are trained

    • Compare model families such as GPT, LLaMA, T5, and PaLM

    • Fine-tune LLMs with Hugging Face, including SFT, LoRA, and QLoRA

    • Deploy and optimize LLMs with quantization, distillation, and cloud hosting

    • Address safety, bias, and regulatory concerns in LLM applications

    • Apply LLMs to use cases such as chatbots, coding assistants, and RAG systems

  • Course Large Language Models : Training

Audience of the course Large Language Models

    The course Large Language Models is intended for software engineers, data scientists, and technical professionals who want to work with large language models (LLMs).

    Prerequisites Large Language Models Course

    To participate in the course, a basic understanding of Python and machine learning is required. Familiarity with neural networks or natural language processing is useful.

Delivery of the course Large Language Models

    The course is led by an experienced trainer and includes a mix of theory and hands-on exercises. Demonstrations and case studies involving LLMs are used to illustrate key concepts.

    Large Language Models Certificate

After successfully completing the course, attendees receive a certificate of participation in the course Large Language Models.

  • Course Large Language Models : Modules

Module 1: Intro to LLMs

    • What are LLMs?
    • Transformer architecture
    • Training Objectives (causal, masked)
    • Evolution of LLMs (GPT, BERT, T5)
    • Open Source vs Proprietary LLMs
    • Tokenization and Vocabulary
    • Attention Mechanism (see the sketch after this list)
    • Model Scaling Laws
    • Transfer Learning
    • Pretraining vs Fine-Tuning
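
    To make the attention mechanism topic concrete, below is a minimal sketch of scaled dot-product attention in plain NumPy. The single-head setup, shapes, and random inputs are illustrative assumptions; production models use batched, multi-head variants.

    # Toy single-head scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V                              # weighted mix of values

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
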
Module 2: Model Architectures

    • Decoder vs Encoder-Decoder Models
    • GPT, LLaMA, T5, and PaLM
    • Training Pipeline Overview
    • Optimizers (Adam, Adafactor)
    • Precision (FP32, FP16, quantization)
    • Transformers (Hugging Face), Megatron, DeepSpeed
    • Parameter vs Instruction Tuning
    • LoRA and QLoRA (see the sketch after this list)
    • In-context Learning
    • Reinforcement Learning from Human Feedback (RLHF)
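
    As a brief illustration of the LoRA topic above, here is a minimal sketch using the Hugging Face peft library. The base model ("gpt2"), target modules, and hyperparameters are illustrative assumptions, not course-prescribed values.

    # Minimal LoRA sketch with Hugging Face peft: freeze the base model and
    # inject trainable low-rank adapters into the attention projections.
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained("gpt2")     # illustrative base model
    config = LoraConfig(
        r=8,                        # rank of the low-rank update
        lora_alpha=16,              # scaling factor for the update
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # only a small fraction is trainable
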
Module 3: Training LLMs

    • Dataset Creation and Curation
    • Tokenizer Customization
    • Data Preprocessing
    • Fine-Tuning with Hugging Face
    • SFT (Supervised Fine-Tuning; see the sketch after this list)
    • Adapters and LoRA
    • Evaluation Metrics
    • Avoiding Overfitting
    • Model Alignment
    • Model Evaluation and Benchmarking
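
    To illustrate the supervised fine-tuning topic above, here is a minimal sketch with the Hugging Face Trainer. The "gpt2" model and the tiny inline dataset are illustrative stand-ins for a real instruction dataset.

    # Minimal SFT sketch: tokenize instruction/response pairs and run one epoch
    # of causal-language-modeling fine-tuning with the Hugging Face Trainer.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token       # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    data = Dataset.from_dict({"text": [
        "Instruction: Greet the user.\nResponse: Hello!",
        "Instruction: Add 2 and 2.\nResponse: 4",
    ]})
    tokenized = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=64),
                         batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="sft-demo", per_device_train_batch_size=2,
                               num_train_epochs=1),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()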

Module 4: LLM Deployment

    • Inference Optimization
    • Model Distillation
    • Quantization Techniques (see the sketch after this list)
    • Hosting on AWS, GCP, Azure
    • Using Model Gateways
    • LangChain and Semantic Search
    • Vector Stores and Embeddings
    • Caching Responses
    • Load Balancing
    • Cost Optimization Strategies
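
    As a brief illustration of the quantization topic above, the toy sketch below applies symmetric int8 post-training quantization to a weight matrix in NumPy; real deployments rely on libraries such as bitsandbytes or GPTQ kernels rather than hand-rolled code.

    # Toy symmetric int8 post-training quantization: store weights as int8
    # plus one float scale, then dequantize to approximate the originals.
    import numpy as np

    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0              # per-tensor scale
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
    q, s = quantize_int8(w)
    print(np.abs(w - dequantize(q, s)).max())        # small reconstruction error
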
Module 5: Safety and Bias

    • Understanding Model Biases
    • Mitigation Strategies
    • Model Auditing
    • Adversarial Prompts
    • User Privacy
    • Filtering and Moderation
    • Red Teaming
    • Explainability in LLMs
    • Interpreting Outputs
    • Regulatory and Legal Issues
Module 6: LLM Use Cases

    • Coding Assistants
    • AI for Legal and Finance
    • Education and Learning
    • Health Care and Biotech
    • Chatbots and Agents
    • RAG Systems (see the sketch after this list)
    • Tool Use and Plugins
    • Enterprise Use of LLMs
    • Evaluating New Models
    • Future Directions in LLM Research
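
    To make the RAG topic concrete, below is a toy retrieval sketch using the sentence-transformers library. The embedding model name, the two-document corpus, and the prompt template are illustrative assumptions.

    # Toy RAG retrieval step: embed a corpus, pick the document closest to the
    # query by cosine similarity, and prepend it to the prompt for an LLM.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model
    corpus = [
        "LoRA adds trainable low-rank matrices to frozen weights.",
        "Quantization stores weights in fewer bits to cut memory use.",
    ]
    doc_vecs = encoder.encode(corpus, normalize_embeddings=True)

    query = "How does LoRA work?"
    q_vec = encoder.encode([query], normalize_embeddings=True)[0]
    best = int(np.argmax(doc_vecs @ q_vec))             # cosine similarity via dot product
    print(f"Context: {corpus[best]}\nQuestion: {query}\nAnswer:")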