LLMOps

Learn LLMOps best practices as you design and automate the steps to tune an LLM for a specific task and deploy it as a callable API.

Description

What you’ll learn in this course

In this course, you’ll work through the LLMOps pipeline: pre-processing training data for supervised instruction tuning, then adapting a supervised tuning pipeline to train and deploy a custom LLM. This is useful for creating an LLM workflow for your specific application. For example, you’ll build a question-answering chatbot tailored to Python coding questions in this course.

Throughout the course, you’ll cover the key steps of creating the LLMOps pipeline:

  • Retrieve and transform training data for supervised fine-tuning of an LLM.
  • Version your data and tuned models to track your tuning experiments.
  • Configure an open-source supervised tuning pipeline and then execute that pipeline to train and then deploy a tuned LLM.
  • Output and study safety scores to responsibly monitor and filter your LLM application’s behavior.
  • Try out the tuned and deployed LLM yourself in the classroom!

Tools you’ll practice with include the BigQuery data warehouse, the open-source Kubeflow Pipelines, and Google Cloud.
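The first step above, transforming retrieved records into instruction-tuning data, can be sketched in a few lines of Python. Everything here is illustrative: the raw rows stand in for records retrieved from a data warehouse such as BigQuery, and the `input_text`/`output_text` field names and the prompt template are assumptions, not the course's exact schema.

```python
import json

# Hypothetical raw Q&A rows, standing in for records retrieved from a
# data warehouse such as BigQuery (schema assumed for illustration).
raw_rows = [
    {
        "question": "How do I reverse a list in Python?",
        "answer": "Use the reversed() built-in or list.reverse().",
    },
]

# Assumed template that frames each example as an instruction,
# so the tuned model learns to answer in this format.
PROMPT_TEMPLATE = "Please answer the following Python question:\n{question}"


def to_tuning_example(row):
    """Map one raw Q&A row to an input/output pair for supervised tuning."""
    return {
        "input_text": PROMPT_TEMPLATE.format(question=row["question"]),
        "output_text": row["answer"],
    }


examples = [to_tuning_example(r) for r in raw_rows]

# Write JSON Lines (one JSON object per line), a format many
# supervised tuning pipelines accept as a training dataset.
with open("tune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Versioning this file (for example, by embedding a timestamp in its name) then lets you tie each tuning run back to the exact data it was trained on.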

What’s included