
Red Hat OpenShift and KubeFlow ML Pipelines with Tekton
Description:
🎓 What will you learn?
In this workshop, you will get hands-on experience with Kubeflow Pipelines: reusable, end-to-end ML workflows built with the Kubeflow Pipelines SDK that orchestrate Kubernetes resources. With pipelines, the entire ML lifecycle (such as data preprocessing, hyperparameter tuning, model training, and model inference) can be managed and reused.
The Kubeflow Pipelines SDK allows data scientists to define end-to-end machine learning and data pipelines. By default, the output of the Kubeflow Pipelines SDK compiler is Argo YAML.
The kfp-tekton SDK extends the Compiler and the Client of the Kubeflow Pipelines SDK to generate Tekton YAML and to subsequently upload and run the pipeline on the Kubeflow Pipelines engine backed by Tekton.
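To make the compile step concrete, here is a minimal, illustrative sketch (not part of the official workshop materials): a toy two-step pipeline defined with the KFP DSL and compiled to Tekton YAML with kfp-tekton's TektonCompiler. The component, pipeline name, base image, and output file name are placeholders, and it assumes the v1-style kfp and kfp-tekton SDKs are installed (pip install kfp kfp-tekton).

    from kfp import dsl
    from kfp.components import create_component_from_func
    from kfp_tekton.compiler import TektonCompiler

    def add(a: float, b: float) -> float:
        # Toy component standing in for a real ML step (preprocess, train, ...).
        return a + b

    # Turn the Python function into a reusable, containerized pipeline component.
    add_op = create_component_from_func(add, base_image='python:3.9')

    @dsl.pipeline(name='toy-pipeline',
                  description='Minimal pipeline used to illustrate Tekton compilation.')
    def toy_pipeline(a: float = 1.0, b: float = 7.0):
        first = add_op(a, 4.0)
        add_op(first.output, b)

    if __name__ == '__main__':
        # kfp.compiler.Compiler would emit Argo YAML; TektonCompiler emits Tekton YAML instead.
        TektonCompiler().compile(toy_pipeline, 'toy_pipeline.yaml')

The resulting toy_pipeline.yaml can then be uploaded and run on a Kubeflow Pipelines deployment with the Tekton backend, which is what the agenda below walks through.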
🌟 Session outcomes
- Learn different ways to compile, upload, and execute Kubeflow Pipelines with the Tekton backend
- Understand the features supported by the KFP-Tekton compiler
🎓 Agenda
- Create your pipeline using Kubeflow Pipelines DSL and compile it to Tekton YAML.
- Upload the compiled Tekton YAML to the Kubeflow Pipelines engine (via the API and the UI; see the client sketch after this agenda).
- Run the pipeline end-to-end with logging and artifact tracking enabled.
- Hands-on lab
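As a rough sketch of the upload-and-run step, assuming the kfp-tekton TektonClient, a placeholder endpoint KFP_HOST, and the toy_pipeline.yaml produced above (your OpenShift route and authentication setup will differ):

    from kfp_tekton import TektonClient

    # Placeholder for your Kubeflow Pipelines API endpoint (e.g. an OpenShift route).
    KFP_HOST = 'https://<your-kfp-route>/pipeline'

    client = TektonClient(host=KFP_HOST)

    # Upload the compiled Tekton YAML as a reusable pipeline (also possible via the UI).
    client.upload_pipeline('toy_pipeline.yaml', pipeline_name='toy-pipeline')

    # Alternatively, compile and launch a run in one call from the pipeline function:
    # client.create_run_from_pipeline_func(toy_pipeline, arguments={'a': 2.0, 'b': 5.0})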
👩💻 Who should attend
- Data Scientists
- Developers/Administrators who are interested in Red Hat OpenShift and CI/CD Pipelines
👩💻 Prerequisites
- Sign up for an IBM Cloud account: https://ibm.biz/Bdq7EJ
or
- Basic understanding of ML and OpenShift
🍪 How to attend
- Join on Crowdcast: https://www.crowdcast.io/e/red-hat-openshift-and
🎙️ Speakers
- Karim Deif, Client Developer Advocate, IBM @DeifKarim
- Sbusiso Mkhombe, Developer Advocate, IBM @Sbusiso_Mkhombe
__________________________________________________________
By registering for this event, you acknowledge that the session will be recorded and consent to the recording being featured on IBM media platforms and pages.